[jira] [Commented] (DRILL-5712) Update the pom files with dependency exclusions for commons-codec

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167228#comment-16167228
 ] 

ASF GitHub Bot commented on DRILL-5712:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/903


> Update the pom files with dependency exclusions for commons-codec
> -
>
> Key: DRILL-5712
> URL: https://issues.apache.org/jira/browse/DRILL-5712
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Sindhuri Ramanarayan Rayavaram
>Assignee: Sindhuri Ramanarayan Rayavaram
>
> In java-exec, we add a dependency on commons-codec version 1.10. Other 
> dependencies such as hadoop-common, parquet-column, etc. pull in different 
> versions of commons-codec. Exclusions should be added for commons-codec in 
> these dependencies.
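
A hedged sketch of what such an exclusion could look like in a pom.xml (the 
hadoop-common coordinates here are illustrative; the actual change may touch 
more artifacts):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <exclusions>
    <exclusion>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}

With such exclusions in place, only the version declared by java-exec (1.10) 
remains on the classpath.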





[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167203#comment-16167203
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r139045903
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggregator.java
 ---
@@ -47,10 +47,7 @@
   // OK - batch returned, NONE - end of data, RESTART - call again
   public enum AggIterOutcome { AGG_OK, AGG_NONE, AGG_RESTART }
 
-  public abstract void setup(HashAggregate hashAggrConfig, HashTableConfig htConfig, FragmentContext context,
-                             OperatorStats stats, OperatorContext oContext, RecordBatch incoming, HashAggBatch outgoing,
-                             LogicalExpression[] valueExprs, List<TypedFieldId> valueFieldIds, TypedFieldId[] keyFieldIds,
-                             VectorContainer outContainer) throws SchemaChangeException, IOException, ClassTransformationException;
+  public abstract void setup(HashAggregate hashAggrConfig, HashTableConfig htConfig, FragmentContext context, OperatorStats stats, OperatorContext oContext, RecordBatch incoming, HashAggBatch outgoing, LogicalExpression[] valueExprs, List<TypedFieldId> valueFieldIds, TypedFieldId[] keyFieldIds, VectorContainer outContainer, int extraRowBytes) throws SchemaChangeException, IOException, ClassTransformationException;
--- End diff --

That was one of the IDE's ideas. Simplification could be done as part of 
future cleanup work (like DRILL-5779).


> hash agg spill to disk, second phase OOM
> 
>
> Key: DRILL-5694
> URL: https://issues.apache.org/jira/browse/DRILL-5694
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Chun Chang
>Assignee: Boaz Ben-Zvi
>
> | 1.11.0-SNAPSHOT  | d622f76ee6336d97c9189fc589befa7b0f4189d6  | DRILL-5165: 
> For limit all case, no need to push down limit to scan  | 21.07.2017 @ 
> 10:36:29 PDT
> Second phase agg ran out of memory. It is not supposed to. Test data is 
> currently only accessible locally.
> /root/drill-test-framework/framework/resources/Advanced/hash-agg/spill/hagg15.q
> Query:
> select row_count, sum(row_count), avg(double_field), max(double_rand), 
> count(float_rand) from parquet_500m_v1 group by row_count order by row_count 
> limit 30
> Failed with exception
> java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory 
> while executing the query.
> HT was: 534773760 OOM at Second Phase. Partitions: 32. Estimated batch size: 
> 4849664. Planned batches: 0. Rows spilled so far: 6459928 Memory limit: 
> 536870912 so far allocated: 534773760.
> Fragment 1:6
> [Error Id: a193babd-f783-43da-a476-bb8dd4382420 on 10.10.30.168:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) HT was: 534773760 
> OOM at Second Phase. Partitions: 32. Estimated batch size: 4849664. Planned 
> batches: 0. Rows spilled so far: 6459928 Memory limit: 536870912 so far 
> allocated: 534773760.
> 
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.checkGroupAndAggrValues():1175
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.doWork():539
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():168
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext():191
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745
>   Caused By (org.apache.drill.exec.exception.OutOfMemoryException) Unable to 
> allocate buffer of size 4194304 due to memory limit. Current allocation: 
> 534773760
> org.apache.drill.exec.memory.BaseAllocator.buffer():238

[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167202#comment-16167202
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r139045744
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
 ---
@@ -1335,7 +1470,7 @@ private void updateStats(HashTable[] htables) {
     }
     if ( rowsReturnedEarly > 0 ) {
       stats.setLongStat(Metric.SPILL_MB, // update stats - est. total MB returned early
-          (int) Math.round( rowsReturnedEarly * estRowWidth / 1024.0D / 1024.0));
+          (int) Math.round( rowsReturnedEarly * estOutputRowWidth / 1024.0D / 1024.0));
--- End diff --

Work will be done later as part of DRILL-5779 
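
As an illustrative computation (the row width here is made up, not taken from 
this run): with rowsReturnedEarly = 6,459,928 and estOutputRowWidth = 80 
bytes, Math.round(6459928 * 80 / 1024.0D / 1024.0) = 493, i.e. about 493 MB 
reported under SPILL_MB.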



[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167187#comment-16167187
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r139045329
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
 ---
@@ -545,16 +584,19 @@ public AggOutcome doWork() {
       if (EXTRA_DEBUG_1) {
         logger.debug("Starting outer loop of doWork()...");
       }
-      for (; underlyingIndex < currentBatchRecordCount; incIndex()) {
+      while (underlyingIndex < currentBatchRecordCount) {
         if (EXTRA_DEBUG_2) {
           logger.debug("Doing loop with values underlying {}, current {}", underlyingIndex, currentIndex);
         }
         checkGroupAndAggrValues(currentIndex);
+
+        if ( retrySameIndex ) { retrySameIndex = false; }  // need to retry this row (e.g. we had an OOM)
--- End diff --

So why does "or before" have spaces? :-)



[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167183#comment-16167183
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r139045072
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
 ---
@@ -109,14 +107,21 @@
 
   private boolean isTwoPhase = false; // 1 phase or 2 phase aggr?
   private boolean is2ndPhase = false;
-  private boolean canSpill = true; // make it false in case can not spill
+  private boolean is1stPhase = false;
+  private boolean canSpill = true; // make it false in case can not spill/return-early
   private ChainedHashTable baseHashTable;
   private boolean earlyOutput = false; // when 1st phase returns a partition due to no memory
   private int earlyPartition = 0; // which partition to return early
-
-  private long memoryLimit; // max memory to be used by this oerator
-  private long estMaxBatchSize = 0; // used for adjusting #partitions
-  private long estRowWidth = 0;
+  private boolean retrySameIndex = false; // in case put failed during 1st phase - need to output early, then retry
+  private boolean useMemoryPrediction = false; // whether to use memory prediction to decide when to spill
+  private long estMaxBatchSize = 0; // used for adjusting #partitions and deciding when to spill
+  private long estRowWidth = 0; // the size of the internal "row" (keys + values + extra columns)
+  private long estValuesRowWidth = 0; // the size of the internal values ( values + extra )
+  private long estOutputRowWidth = 0; // the size of the output "row" (no extra columns)
+  private long estValuesBatchSize = 0; // used for "reserving" memory for the Values batch to overcome an OOM
+  private long estOutgoingAllocSize = 0; // used for "reserving" memory for the Outgoing Output Values to overcome an OOM
+  private long reserveValueBatchMemory; // keep "reserve memory" for Values Batch
+  private long reserveOutgoingMemory; // keep "reserve memory" for the Outgoing (Values only) output
--- End diff --

Will wait for some future cleanup opportunity.
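
The comments above describe a "reserve memory" scheme; here is a minimal 
sketch of the idea (hypothetical names and logic, not Drill's actual 
allocator code): part of the budget is kept aside so the Values/Outgoing 
batches can still be allocated after the main budget is exhausted.

{code}
// Hypothetical sketch of the reserve-memory idea, not Drill's allocator.
class ReserveBudget {
  private long available; // main budget, excluding the reserve
  private long reserve;   // kept aside for the values/outgoing batches

  ReserveBudget(long memoryLimit, long reserveBytes) {
    this.available = memoryLimit - reserveBytes;
    this.reserve = reserveBytes;
  }

  // Regular allocations draw from the main budget; on failure the
  // caller spills a partition and retries the same row.
  boolean tryAllocate(long bytes) {
    if (bytes <= available) {
      available -= bytes;
      return true;
    }
    return false;
  }

  // When the values/outgoing batch must be built, fold the reserve
  // back into the budget so that allocation cannot OOM.
  void useReserve() {
    available += reserve;
    reserve = 0;
  }
}
{code}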



[jira] [Created] (DRILL-5794) Projection pushdown does not preserve collation

2017-09-14 Thread Gautam Kumar Parai (JIRA)
Gautam Kumar Parai created DRILL-5794:
-

 Summary: Projection pushdown does not preserve collation
 Key: DRILL-5794
 URL: https://issues.apache.org/jira/browse/DRILL-5794
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.11.0
Reporter: Gautam Kumar Parai
Assignee: Gautam Kumar Parai


While looking at the projection-pushdown-into-scan rule in Drill, it seems we 
do not consider changes to collation. This can happen in general, not just for 
the projection pushdown across other rels.





[jira] [Created] (DRILL-5793) NPE on close

2017-09-14 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-5793:
-

 Summary: NPE on close
 Key: DRILL-5793
 URL: https://issues.apache.org/jira/browse/DRILL-5793
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 1.12.0
 Environment: Drill 1.12.0 commit : 
aaff1b35b7339fb4e6ab480dd517994ff9f0a5c5
Reporter: Khurram Faraaz


The code looks wrong:
{noformat}
 @Override
 public void close() throws Exception {
   options.close();
 }
{noformat}
If the shutdown occurs too early, options is not yet assigned and an NPE results.
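
A minimal defensive sketch of the fix idea (hypothetical; the committed change 
may differ):
{noformat}
@Override
public void close() throws Exception {
  // options may still be null if shutdown races ahead of startup
  if (options != null) {
    options.close();
  }
}
{noformat}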

{noformat}
2017-09-14 20:16:39,551 [main] DEBUG o.apache.drill.exec.server.Drillbit - 
Shutdown begun.
2017-09-14 20:16:41,560 [pool-5-thread-1] INFO  
o.a.drill.exec.rpc.user.UserServer - closed eventLoopGroup 
io.netty.channel.nio.NioEventLoopGroup@71a84ff4 in 1006 ms
2017-09-14 20:16:41,560 [pool-5-thread-2] INFO  
o.a.drill.exec.rpc.data.DataServer - closed eventLoopGroup 
io.netty.channel.nio.NioEventLoopGroup@f711283 in 1005 ms
2017-09-14 20:16:41,561 [pool-5-thread-1] INFO  
o.a.drill.exec.service.ServiceEngine - closed userServer in 1007 ms
2017-09-14 20:16:41,562 [pool-5-thread-2] DEBUG 
o.a.drill.exec.memory.BaseAllocator - closed allocator[rpc:bit-data].
2017-09-14 20:16:41,562 [pool-5-thread-2] INFO  
o.a.drill.exec.service.ServiceEngine - closed dataPool in 1008 ms
2017-09-14 20:16:41,563 [main] DEBUG o.a.drill.exec.memory.BaseAllocator - 
closed allocator[rpc:user].
2017-09-14 20:16:41,563 [main] DEBUG o.a.drill.exec.memory.BaseAllocator - 
closed allocator[rpc:bit-control].
2017-09-14 20:16:41,593 [main] DEBUG o.a.drill.exec.memory.BaseAllocator - 
closed allocator[ROOT].
2017-09-14 20:16:41,593 [main] WARN  o.apache.drill.exec.server.Drillbit - 
Failure on close()
java.lang.NullPointerException: null
at 
org.apache.drill.exec.server.options.SystemOptionManager.close(SystemOptionManager.java:369)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.server.DrillbitContext.close(DrillbitContext.java:241) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.work.WorkManager.close(WorkManager.java:154) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:173) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:314) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:290) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:286) 
[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
{noformat}





[jira] [Commented] (DRILL-5269) SYSTEM ERROR: JsonMappingException: No suitable constructor found for type [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167074#comment-16167074
 ] 

ASF GitHub Bot commented on DRILL-5269:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/926


> SYSTEM ERROR: JsonMappingException: No suitable constructor found for type 
> [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]
> 
>
> Key: DRILL-5269
> URL: https://issues.apache.org/jira/browse/DRILL-5269
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Anas
>Assignee: Vlad Rozov
>Priority: Critical
> Attachments: tc_sm_parquet.tar.gz
>
>
> I ran a query that has nested joins. The query fails with the following 
> exception.
> {code}
> SYSTEM ERROR: JsonMappingException: No suitable constructor found for type 
> [simple type, class org.apache.drill.exec.store.direct.DirectSubScan]: can 
> not instantiate from JSON object (missing default constructor or creator, or 
> perhaps need to add/enable type information?)
>  at [Source: {
>   "pop" : "broadcast-sender",
>   "@id" : 0,
>   "receiver-major-fragment" : 1,
>   "child" : {
> "pop" : "selection-vector-remover",
> "@id" : 1,
> "child" : {
>   "pop" : "filter",
>   "@id" : 2,
>   "child" : {
> "pop" : "project",
> "@id" : 3,
> "exprs" : [ {
>   "ref" : "`__measure__10`",
>   "expr" : "`count`"
> } ],
> "child" : {
>   "pop" : "DirectSubScan",
>   "@id" : 4,
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "reader" : [ {
> "count" : 633
>   } ],
>   "cost" : 0.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 20.0
>   },
>   "expr" : "greater_than(`__measure__10`, 0) ",
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 10.0
>   },
>   "destinations" : [ {
> "minorFragmentId" : 0,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   }, {
> "minorFragmentId" : 1,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   } ],
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> }; line: 20, column: 11] (through reference chain: 
> org.apache.drill.exec.physical.config.BroadcastSender["child"]->org.apache.drill.exec.physical.config.SelectionVectorRemover["child"]->org.apache.drill.exec.physical.config.Filter["child"]->org.apache.drill.exec.physical.config.Project["child"])
> Fragment 3:0
> [Error Id: 9fb4ef4a-f118-4625-94f5-56c96dc7bdb4 on 192.168.0.100:31010]
>   (com.fasterxml.jackson.databind.JsonMappingException) No suitable 
> constructor found for type [simple type, class 
> org.apache.drill.exec.store.direct.DirectSubScan]: can not instantiate from 
> JSON object (missing default constructor or creator, or perhaps need to 
> add/enable type information?)
>  at [Source: {
>   "pop" : "broadcast-sender",
>   "@id" : 0,
>   "receiver-major-fragment" : 1,
>   "child" : {
> "pop" : "selection-vector-remover",
> "@id" : 1,
> "child" : {
>   "pop" : "filter",
>   "@id" : 2,
>   "child" : {
> "pop" : "project",
> "@id" : 3,
> "exprs" : [ {
>   "ref" : "`__measure__10`",
>   "expr" : "`count`"
> } ],
> "child" : {
>   "pop" : "DirectSubScan",
>   "@id" : 4,
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "reader" : [ {
> "count" : 633
>   } ],
>   "cost" : 0.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 20.0
>   },
>   "expr" : "greater_than(`__measure__10`, 0) ",
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : 10.0
>   },
>   "destinations" : [ {
> "minorFragmentId" : 0,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   }, {
> "minorFragmentId" : 1,
> "endpoint" : "Cg0xOTIuMTY4LjAuMTAwEKLyARij8gEgpPIB"
>   } ],
>   "initialAllocation" : 100,
>   "maxAllocation" : 100,
>   "cost" : 10.0
> }; line: 20, column: 11] (through reference chain: 
> 

[jira] [Closed] (DRILL-4595) FragmentExecutor.fail() should interrupt the fragment thread to avoid possible query hangs

2017-09-14 Thread Khurram Faraaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khurram Faraaz closed DRILL-4595.
-
Resolution: Fixed

> FragmentExecutor.fail() should interrupt the fragment thread to avoid 
> possible query hangs
> --
>
> Key: DRILL-4595
> URL: https://issues.apache.org/jira/browse/DRILL-4595
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Deneche A. Hakim
>Assignee: Deneche A. Hakim
> Fix For: Future
>
>
> When a fragment fails, it is assumed it will be able to close itself and send 
> its FAILED state to the foreman, which will cancel any running fragments. 
> FragmentExecutor.cancel() will interrupt the thread, making sure those 
> fragments don't stay blocked.
> However, if a fragment is already blocked when its fail method is called, the 
> foreman may never be notified about this and the query will hang forever. One 
> such scenario is the following:
> - generally it's a CTAS running on a large cluster (lots of writers running 
> in parallel)
> - logs show that the user channel was closed and UserServer caused the root 
> fragment to move to a FAILED state
> - jstack shows that the root fragment is blocked in its receiver waiting for 
> data
> - jstack also shows that ALL other fragments are no longer running, and the 
> logs show that all of them succeeded
> - the foreman waits *forever* for the root fragment to finish
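
A minimal sketch of the proposed behavior (hypothetical names, not Drill's 
actual FragmentExecutor): record the executing thread so that fail() can 
interrupt it, ensuring a fragment blocked in its receiver wakes up and the 
foreman is always notified.

{code}
// Hypothetical sketch: fail() interrupts the fragment thread so a
// blocked fragment can observe the failure and report its final state.
class FragmentRunner implements Runnable {
  private volatile Thread myThread;

  @Override
  public void run() {
    myThread = Thread.currentThread();
    try {
      doWork(); // may block forever waiting for data
    } catch (InterruptedException e) {
      // unblocked by fail(); fall through to cleanup
    } finally {
      sendFinalState(); // the foreman is always notified
    }
  }

  void fail() {
    Thread t = myThread;
    if (t != null) {
      t.interrupt(); // wake a fragment blocked in its receiver
    }
  }

  private void doWork() throws InterruptedException { /* receive data */ }

  private void sendFinalState() { /* report FAILED/FINISHED to the foreman */ }
}
{code}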





[jira] [Commented] (DRILL-4595) FragmentExecutor.fail() should interrupt the fragment thread to avoid possible query hangs

2017-09-14 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16167006#comment-16167006
 ] 

Khurram Faraaz commented on DRILL-4595:
---

Verified on Drill 1.12.0, commit id aaff1b35b7339fb4e6ab480dd517994ff9f0a5c5.

Lowered the memory:
{noformat}
export DRILL_HEAP=${DRILL_HEAP:-"1G"}
export DRILL_MAX_DIRECT_MEMORY=${DRILL_MAX_DIRECT_MEMORY:-"1G"}
{noformat}

Ran a long-running CTAS:
{noformat}
0: jdbc:drill:schema=dfs.tmp> CREATE TABLE tbl_4595 PARTITION BY (key2) AS 
SELECT * FROM `twoKeyJsn.json` t;
+-----------+-----------------------------+
| Fragment  | Number of records written   |
+-----------+-----------------------------+
| 0_0       | 26212355                    |
+-----------+-----------------------------+
1 row selected (511.547 seconds)
{noformat}






[jira] [Commented] (DRILL-5431) Support SSL

2017-09-14 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166725#comment-16166725
 ] 

Laurent Goujon commented on DRILL-5431:
---

(It is also regrettable that the OpenJDK/Oracle JDK has a different trust 
store than the OS one, with no way of integrating it.)

> Support SSL
> ---
>
> Key: DRILL-5431
> URL: https://issues.apache.org/jira/browse/DRILL-5431
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - Java, Client - ODBC
>Reporter: Sudheesh Katkam
>Assignee: Sudheesh Katkam
>
> Support SSL between Drillbit and JDBC/ODBC drivers. Drill already supports 
> HTTPS for web traffic.





[jira] [Commented] (DRILL-5431) Support SSL

2017-09-14 Thread Laurent Goujon (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166721#comment-16166721
 ] 

Laurent Goujon commented on DRILL-5431:
---

I had a cursory look at the commits. One comment I have is about a possible 
lack of flexibility regarding integration with an external truststore for the 
client (and similarly for hostname verification).

For the JDBC driver, because of the way the driver is loaded and connections 
are created, everything has to be done through connection properties :( But it 
doesn't have to be the case for the C++ and Java Drill clients. I believe 
that, similar to HTTP clients where you can provide trust stores and a 
hostname verifier, these clients should have the same capabilities. It is 
probably more of a requirement for the C++ client than the Java one, as the 
JRE comes with a truststore similar to the browser ones, whereas the OpenSSL 
library may have none on Windows (and organizations with their own CA 
integrated at the OS level might require special care). Maybe [~robertw] has 
some opinion on this based on his experience integrating drivers on various 
platforms?
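
For the Java client, a minimal sketch of the kind of hook I mean (only the 
standard JSSE calls below are real; passing the resulting SSLContext to the 
client is the hypothetical part):

{code}
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class ExternalTrustStore {
  // Builds an SSLContext from an external truststore file; a client API
  // could accept this context directly instead of only connection
  // property strings.
  static SSLContext contextFrom(String path, char[] password) throws Exception {
    KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
    try (FileInputStream in = new FileInputStream(path)) {
      ts.load(in, password);
    }
    TrustManagerFactory tmf =
        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(ts);
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);
    return ctx;
  }
}
{code}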






[jira] [Commented] (DRILL-5564) IllegalStateException: allocator[op:21:1:5:HashJoinPOP]: buffer space (16674816) + prealloc space (0) + child space (0) != allocated (16740352)

2017-09-14 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166528#comment-16166528
 ] 

Khurram Faraaz commented on DRILL-5564:
---

Verified by executing the same concurrent test on Drill 1.12.0, commit 
aaff1b3. The exception is no longer seen.

> IllegalStateException: allocator[op:21:1:5:HashJoinPOP]: buffer space 
> (16674816) + prealloc space (0) + child space (0) != allocated (16740352)
> ---
>
> Key: DRILL-5564
> URL: https://issues.apache.org/jira/browse/DRILL-5564
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.11.0
> Environment: 3 node CentOS cluster
>Reporter: Khurram Faraaz
>
> Run a concurrent Java program that executes TPCDS query 11. While the 
> concurrent Java program is under execution, stop the foreman Drillbit (from 
> another shell, using the command below):
> ./bin/drillbit.sh stop
> You will then see the IllegalStateException: allocator[op:21:1:5:HashJoinPOP] 
> and another assertion error in the drillbit.log:
> AssertionError: Failure while stopping processing for operator id 10. 
> Currently have states of processing:false, setup:false, waiting:true.
> Drill 1.11.0 git commit ID: d11aba2 (with assertions enabled)
> 
> Details from drillbit.log on the foreman Drillbit node:
> {noformat}
> 2017-06-05 18:38:33,838 [26ca5afa-7f6d-991b-1fdf-6196faddc229:frag:23:1] INFO 
>  o.a.d.e.w.fragment.FragmentExecutor - 
> 26ca5afa-7f6d-991b-1fdf-6196faddc229:23:1: State change requested RUNNING --> 
> FAILED
> 2017-06-05 18:38:33,849 [26ca5afa-7f6d-991b-1fdf-6196faddc229:frag:23:1] INFO 
>  o.a.d.e.w.fragment.FragmentExecutor - 
> 26ca5afa-7f6d-991b-1fdf-6196faddc229:23:1: State change requested FAILED --> 
> FINISHED
> 2017-06-05 18:38:33,852 [26ca5afa-7f6d-991b-1fdf-6196faddc229:frag:23:1] 
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: AssertionError: 
> Failure while stopping processing for operator id 10. Currently have states 
> of processing:false, setup:false, waiting:true.
> Fragment 23:1
> [Error Id: a116b326-43ed-4569-a20e-a10ba03d215e on centos-01.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> AssertionError: Failure while stopping processing for operator id 10. 
> Currently have states of processing:false, setup:false, waiting:true.
> Fragment 23:1
> [Error Id: a116b326-43ed-4569-a20e-a10ba03d215e on centos-01.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.RuntimeException: java.lang.AssertionError: Failure 
> while stopping processing for operator id 10. Currently have states of 
> processing:false, setup:false, waiting:true.
> at 
> org.apache.drill.common.DeferredException.addThrowable(DeferredException.java:101)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.fail(FragmentExecutor.java:409)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> ... 4 common frames omitted
> Caused by: java.lang.AssertionError: Failure while stopping processing for 
> operator id 10. Currently have states of processing:false, setup:false, 
> waiting:true.
> at 
> org.apache.drill.exec.ops.OperatorStats.stopProcessing(OperatorStats.java:167)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:255) 
> ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> 

[jira] [Closed] (DRILL-5564) IllegalStateException: allocator[op:21:1:5:HashJoinPOP]: buffer space (16674816) + prealloc space (0) + child space (0) != allocated (16740352)

2017-09-14 Thread Khurram Faraaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khurram Faraaz closed DRILL-5564.
-
Resolution: Fixed


[jira] [Closed] (DRILL-4273) If query is cancelled during external sort memory is leaked, merge join fragment is running forever on a foreman node

2017-09-14 Thread Khurram Faraaz (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Khurram Faraaz closed DRILL-4273.
-
Resolution: Fixed

Verified on 1.12.0-SNAPSHOT, commit aaff1b35b7339fb4e6ab480dd517994ff9f0a5c5.


> If query is cancelled during external sort memory is leaked, merge join 
> fragment is running forever on a foreman node
> 
>
> Key: DRILL-4273
> URL: https://issues.apache.org/jira/browse/DRILL-4273
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.5.0
>Reporter: Victoria Markman
>Priority: Critical
> Attachments: 2967b65e-ea42-a736-9f5d-3513914ada88.sys.drill, 
> after_shutdown.log, drillbit.log.node_133, drillbit.log.node_134, 
> drillbit.log.node_135, drillbit.log.node_136
>
>
> Query was cancelled during external sort.
> Here is what happened:
> 1. Query got stuck in CANCELLATION_REQUESTED mode: ( queryid = 
> 2967b65e-ea42-a736-9f5d-3513914ada88 )
> 2. On the foreman node, fragment 6:0 (executing merge join) kept running 
> forever (see stack below)
> {code}
> "2967b65e-ea42-a736-9f5d-3513914ada88:frag:6:0" daemon prio=10 
> tid=0x01af5800 nid=0x769b runnable [0x7fa82fc7c000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Throwable.getStackTraceElement(Native Method)
> at java.lang.Throwable.getOurStackTrace(Throwable.java:827)
> - locked <0x0006eef95f80> (a java.lang.Exception)
> at java.lang.Throwable.getStackTrace(Throwable.java:816)
> at java.lang.Thread.getStackTrace(Thread.java:1589)
> at org.apache.drill.common.StackTrace.<init>(StackTrace.java:33)
> at 
> org.apache.drill.common.HistoricalLog$Event.<init>(HistoricalLog.java:39)
> at 
> org.apache.drill.common.HistoricalLog.recordEvent(HistoricalLog.java:95)
> - locked <0x0006eef95870> (a 
> org.apache.drill.common.HistoricalLog)
> at io.netty.buffer.DrillBuf.<init>(DrillBuf.java:84)
> at 
> org.apache.drill.exec.memory.AllocatorManager$BufferLedger.newDrillBuf(AllocatorManager.java:285)
> at 
> org.apache.drill.exec.memory.BaseAllocator.bufferWithoutReservation(BaseAllocator.java:222)
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:204)
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:177)
> at 
> org.apache.drill.exec.vector.IntVector.allocateBytes(IntVector.java:201)
> at 
> org.apache.drill.exec.vector.IntVector.allocateNew(IntVector.java:183)
> at 
> org.apache.drill.exec.vector.NullableIntVector.allocateNew(NullableIntVector.java:216)
> at 
> org.apache.drill.exec.vector.AllocationHelper.allocateNew(AllocationHelper.java:56)
> at 
> org.apache.drill.exec.physical.impl.join.MergeJoinBatch.allocateBatch(MergeJoinBatch.java:429)
> at 
> org.apache.drill.exec.physical.impl.join.MergeJoinBatch.innerNext(MergeJoinBatch.java:172)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> at 
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:295)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> 

[jira] [Created] (DRILL-5791) Unit test Jackson polymorphic unmarshalling

2017-09-14 Thread Vlad Rozov (JIRA)
Vlad Rozov created DRILL-5791:
-

 Summary: Unit test Jackson polymorphic unmarshalling
 Key: DRILL-5791
 URL: https://issues.apache.org/jira/browse/DRILL-5791
 Project: Apache Drill
  Issue Type: Test
Reporter: Vlad Rozov
Assignee: Vlad Rozov








[jira] [Updated] (DRILL-5761) Disable Lilith ClassicMultiplexSocketAppender by default

2017-09-14 Thread Volodymyr Vysotskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Vysotskyi updated DRILL-5761:
---
Description: 
When running unit tests on a node where the Hiveserver2 service is running, 
the test run hangs in the middle. Jstack shows that some threads are waiting 
on a condition.
{noformat}
Full thread dump

"main" prio=10 tid=0x7f0998009800 nid=0x17f7 waiting on condition 
[0x7f09a0c6d000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00076004ebf0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:324)
at 
de.huxhorn.lilith.sender.MultiplexSendBytesService.sendBytes(MultiplexSendBytesService.java:132)
at 
de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.sendBytes(MultiplexSocketAppenderBase.java:336)
at 
de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.append(MultiplexSocketAppenderBase.java:348)
at 
ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
at 
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
at ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
at 
ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:441)
at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:395)
at ch.qos.logback.classic.Logger.error(Logger.java:558)
at 
org.apache.drill.test.DrillTest$TestLogReporter.failed(DrillTest.java:153)
at org.junit.rules.TestWatcher.failedQuietly(TestWatcher.java:84)
at org.junit.rules.TestWatcher.access$300(TestWatcher.java:46)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:62)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:113)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:94)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:58)
at 
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:134)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


"Thread-112" prio=10 tid=0x7f099911c800 nid=0x1caa waiting on condition 
[0x7f09685f3000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
 

[jira] [Commented] (DRILL-5749) Foreman and Netty threads deadlock

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166143#comment-16166143
 ] 

ASF GitHub Bot commented on DRILL-5749:
---

GitHub user weijietong opened a pull request:

https://github.com/apache/drill/pull/943

DRILL-5749: solve deadlock between foreman and netty threads

@paul-rogers please review this PR again; I failed to squash the commits in 
the last PR, sorry about that.

For the related thread stack, please see 
[DRILL-5749](https://issues.apache.org/jira/browse/DRILL-5749).
The approach is to break the nested condition invocation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/weijietong/drill drill-5749

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/943.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #943


commit b44f780a948c4a0898e7cee042c0590f0713f780
Author: weijietong 
Date:   2017-06-08T08:03:46Z

Merge pull request #1 from apache/master

sync

commit d045c757c80a759b435479cc89f33c749fc16ac2
Author: weijie.tong 
Date:   2017-08-11T08:01:36Z

Merge branch 'master' of github.com:weijietong/drill

commit 08b7006f4c70c45a17ebf7eae6beaa2bdb0d0454
Author: weijie.tong 
Date:   2017-08-20T12:05:51Z

update

commit 9e9ebb497a183e61a72665019e6e04070d912027
Author: weijie.tong 
Date:   2017-08-20T12:07:41Z

revert

commit 837d9fc58440fb584690f93b5f638ddcedf042a1
Author: weijie.tong 
Date:   2017-08-22T10:35:12Z

Merge branch 'master' of github.com:apache/drill

commit b1fc840ad9d0a9959b05a84bfd17f17067def32d
Author: weijie.tong 
Date:   2017-08-29T16:39:48Z

Merge branch 'master' of github.com:apache/drill

commit 52d7a0b795cf2ef29c596e84277cc01f1c105d19
Author: weijie.tong 
Date:   2017-09-14T11:55:26Z

Merge branch 'master' of github.com:apache/drill

commit 2fbc23998ff5c8cb8a2a476221be856d69a559c4
Author: weijie.tong 
Date:   2017-09-14T12:02:55Z

solve deadlock occured between foreman and netty threads




> Foreman and Netty threads deadlock
> --
>
> Key: DRILL-5749
> URL: https://issues.apache.org/jira/browse/DRILL-5749
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: weijie.tong
>Priority: Critical
>
> When the cluster was under highly concurrent queries and the reused control 
> connection encountered an exception, the foreman and netty threads each tried 
> to acquire the other's lock, and a deadlock occurred. The netty thread holds 
> the map (RequestIdMap) lock and tries to acquire the ReconnectingConnection 
> lock to send a command, while the foreman thread holds the 
> ReconnectingConnection lock and tries to acquire the RequestIdMap lock. So 
> the deadlock happened.
> Below is the jstack dump:
> Found one Java-level deadlock:
> =
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman":
>   waiting to lock monitor 0x7f90de3b9648 (object 0x0006b524d7e8, a 
> com.carrotsearch.hppc.IntObjectHashMap),
>   which is held by "BitServer-2"
> "BitServer-2":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> Java stack information for the threads listed above:
> ===
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   at 
> org.apache.drill.exec.rpc.ReconnectingConnection.runCommand(ReconnectingConnection.java:72)
>   - waiting to lock <0x000656affc40> (a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel.sendFragments(ControlTunnel.java:66)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.sendRemoteFragments(Foreman.java:1210)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.setupNonRootFragments(Foreman.java:1141)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:454)
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1045)
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274)
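
As an aside, the cycle that jstack reports above can also be detected
programmatically through the standard ThreadMXBean; a small, self-contained
probe (illustrative only, not part of the patch):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative probe: reports monitor deadlocks the same way jstack does.
public class DeadlockProbe {
  public static void main(String[] args) {
    ThreadMXBean mx = ManagementFactory.getThreadMXBean();
    long[] ids = mx.findDeadlockedThreads(); // null when no deadlock exists
    if (ids == null) {
      System.out.println("No deadlock detected");
      return;
    }
    for (ThreadInfo info : mx.getThreadInfo(ids)) {
      System.out.println(info.getThreadName() + " waiting on " + info.getLockName()
          + " held by " + info.getLockOwnerName());
    }
  }
}
{code}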

[jira] [Commented] (DRILL-5749) Foreman and Netty threads deadlock

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166126#comment-16166126
 ] 

ASF GitHub Bot commented on DRILL-5749:
---

Github user weijietong closed the pull request at:

https://github.com/apache/drill/pull/925


> Foreman and Netty threads deadlock
> --
>
> Key: DRILL-5749
> URL: https://issues.apache.org/jira/browse/DRILL-5749
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: weijie.tong
>Priority: Critical
>
> When the cluster is running highly concurrent queries and the reused control 
> connection hits an exception, the Foreman and Netty threads each try to 
> acquire the lock the other holds, and a deadlock occurs. The Netty thread 
> holds the map (RequestIdMap) lock and then tries to acquire the 
> ReconnectingConnection lock to send a command, while the Foreman thread 
> holds the ReconnectingConnection lock and tries to acquire the RequestIdMap 
> lock. Hence the deadlock.
> Below is the jstack dump:
> Found one Java-level deadlock:
> =
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman":
>   waiting to lock monitor 0x7f90de3b9648 (object 0x0006b524d7e8, a 
> com.carrotsearch.hppc.IntObjectHashMap),
>   which is held by "BitServer-2"
> "BitServer-2":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> Java stack information for the threads listed above:
> ===
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   at 
> org.apache.drill.exec.rpc.ReconnectingConnection.runCommand(ReconnectingConnection.java:72)
>   - waiting to lock <0x000656affc40> (a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel.sendFragments(ControlTunnel.java:66)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.sendRemoteFragments(Foreman.java:1210)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.setupNonRootFragments(Foreman.java:1141)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:454)
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1045)
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>   at java.lang.Thread.run(Thread.java:849)
> "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman":
>   at 
> org.apache.drill.exec.rpc.RequestIdMap.createNewRpcListener(RequestIdMap.java:87)
>   - waiting to lock <0x0006b524d7e8> (a 
> com.carrotsearch.hppc.IntObjectHashMap)
>   at 
> org.apache.drill.exec.rpc.AbstractRemoteConnection.createNewRpcListener(AbstractRemoteConnection.java:153)
>   at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:115)
>   at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:89)
>   at 
> org.apache.drill.exec.rpc.control.ControlConnection.send(ControlConnection.java:65)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel$SendFragment.doRpcCall(ControlTunnel.java:160)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel$SendFragment.doRpcCall(ControlTunnel.java:150)
>   at 
> org.apache.drill.exec.rpc.ListeningCommand.connectionAvailable(ListeningCommand.java:38)
>   at 
> org.apache.drill.exec.rpc.ReconnectingConnection.runCommand(ReconnectingConnection.java:75)
>   - locked <0x000656affc40> (a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel.sendFragments(ControlTunnel.java:66)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.sendRemoteFragments(Foreman.java:1210)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.setupNonRootFragments(Foreman.java:1141)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:454)
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1045)
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>   

[jira] [Commented] (DRILL-5781) Fix unit test failures to use the test config even if a default config is available

2017-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166007#comment-16166007
 ] 

ASF GitHub Bot commented on DRILL-5781:
---

GitHub user vvysotskyi opened a pull request:

https://github.com/apache/drill/pull/942

DRILL-5781: Fix unit test failures to use the test config even if a default
config is available

Please see [DRILL-5781](https://issues.apache.org/jira/browse/DRILL-5781) 
for details.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vvysotskyi/drill DRILL-5781

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/942.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #942


commit b47b4d760adc62e1625d23e80aae611a54ea9e28
Author: Volodymyr Vysotskyi 
Date:   2017-09-07T18:01:12Z

DRILL-5781: Fix unit test failures to use the test config even if a default
config is available




> Fix unit test failures to use the test config even if a default config is available
> --
>
> Key: DRILL-5781
> URL: https://issues.apache.org/jira/browse/DRILL-5781
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>
> Unit tests fail when they are run with the mapr profile.
> Test failures connected with a ZooKeeper configuration that differs from 
> the expected one:
> {noformat}
> DrillClientTest>TestWithZookeeper.setUp:32 » Runtime java.io.IOException: 
> Coul...
>   TestZookeeperClient.testPutWithMatchingVersion » IO Could not configure 
> server...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testStartingClientEnablesCacheAndEnsuresRootNodeExists 
> » IO
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testHasPathThrowsDrillRuntimeException » IO Could not 
> conf...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testHasPathFalseWithVersion » IO Could not configure 
> serve...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestEphemeralStore.testPutAndGetWorksAntagonistacally » IO Could not 
> configure...
>   TestEphemeralStore.tearDown:132 NullPointer
>   TestZookeeperClient.testGetWithVersion » IO Could not configure server 
> because...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestEphemeralStore.testStoreRegistersDispatcherAndStartsItsClient » IO 
> Could n...
>   TestEphemeralStore.tearDown:132 NullPointer
>   TestZookeeperClient.testPutWithNonMatchingVersion » IO Could not configure 
> ser...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testGetWithEventualConsistencyHitsCache » IO Could not 
> con...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testPutIfAbsentWhenPresent » IO Could not configure 
> server...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testHasPathTrueWithVersion » IO Could not configure 
> server...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testPutAndGetWorks » IO Could not configure server 
> because...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testPutIfAbsentWhenAbsent » IO Could not configure 
> server ...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testHasPathWithEventualConsistencyHitsCache » IO Could 
> not...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testCreate » IO Could not configure server because SASL 
> co...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testDelete » IO Could not configure server because SASL 
> co...
>   TestZookeeperClient.tearDown:86 NullPointer
>   TestZookeeperClient.testEntriesReturnsRelativePaths » IO Could not 
> configure s...
>   TestZookeeperClient.tearDown:86 NullPointer
> TestPStoreProviders>TestWithZookeeper.setUp:32 » Runtime java.io.IOException: 
> ...
>   TestPauseInjection.pauseOnSpecificBit:151 » Runtime java.io.IOException: 
> Could...
>   TestExceptionInjection.injectionOnSpecificBit:217 » Runtime 
> java.io.IOExceptio...
> HBaseTestsSuite.initCluster:110 » IO No JAAS configuration section named 
> 'Serv...
> {noformat}
> Test failures connected with a Hadoop configuration that differs from the expected one:
> {noformat}
> TestInboundImpersonation.setup:58->BaseTestImpersonation.startMiniDfsCluster:80->BaseTestImpersonation.startMiniDfsCluster:111
>  » ClassCast
>   
> TestImpersonationMetadata.setup:58->BaseTestImpersonation.startMiniDfsCluster:80->BaseTestImpersonation.startMiniDfsCluster:111
>  » ClassCast
>   
> {noformat}
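
A plausible direction for such a fix (a sketch only; the resource name
drill-test.conf is hypothetical) is to load the test configuration explicitly
via Typesafe Config so it takes precedence over any default configuration a
build profile places on the classpath:

{code}
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Sketch: make the test config authoritative regardless of what other
// configuration files happen to be on the classpath.
// "drill-test.conf" is a hypothetical resource name used for illustration.
public class TestConfigSketch {
  public static Config loadTestConfig() {
    Config testConfig = ConfigFactory.parseResources("drill-test.conf");
    // The reference config supplies defaults for keys the test config omits.
    return testConfig.withFallback(ConfigFactory.defaultReference()).resolve();
  }
}
{code}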

[jira] [Created] (DRILL-5790) PCAP format explicitly opens local file

2017-09-14 Thread Ted Dunning (JIRA)
Ted Dunning created DRILL-5790:
--

 Summary: PCAP format explicitly opens local file
 Key: DRILL-5790
 URL: https://issues.apache.org/jira/browse/DRILL-5790
 Project: Apache Drill
  Issue Type: Bug
Reporter: Ted Dunning


Note the new FileInputStream line, which always opens the path on the local file system:
{code}
@Override
public void setup(final OperatorContext context, final OutputMutator output)
    throws ExecutionSetupException {
  try {
    this.output = output;
    this.buffer = new byte[10];
    this.in = new FileInputStream(inputPath);
    this.decoder = new PacketDecoder(in);
    this.validBytes = in.read(buffer);
    this.projectedCols = getProjectedColsIfItNull();
    setColumns(projectedColumns);
  } catch (IOException io) {
    throw UserException.dataReadError(io)
        .addContext("File name:", inputPath)
        .build(logger);
  }
}
{code}
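
For contrast, a sketch of the direction a fix could take (assuming the reader
can obtain a Hadoop FileSystem handle; how that handle is wired in depends on
the plugin): open the path through the FileSystem abstraction so that DFS
paths resolve correctly instead of always hitting the local disk.

{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: resolve the input through the configured file system
// (HDFS, MapR-FS, local, ...) rather than java.io.FileInputStream.
public class PcapOpenSketch {
  static InputStream open(String inputPath) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    return fs.open(new Path(inputPath)); // returns FSDataInputStream
  }
}
{code}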



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)