[jira] [Resolved] (DRILL-5522) OOM during the merge and spill process of the managed external sort

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou resolved DRILL-5522.
---
Resolution: Fixed

This has been resolved.

> OOM during the merge and spill process of the managed external sort
> ---
>
> Key: DRILL-5522
> URL: https://issues.apache.org/jira/browse/DRILL-5522
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Attachments: 26e334aa-1afa-753f-3afe-862f76b80c18.sys.drill, 
> drillbit.log, drillbit.out, drill-env.sh
>
>
> git.commit.id.abbrev=1e0a14c
> The below query fails with an OOM
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1552428800;
> create table dfs.drillTestDir.xsort_ctas3_multiple partition by (type, aCol) 
> as select type, rptds, rms, s3.rms.a aCol, uid from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a
> ) s3;
> {code}
> Stack trace
> {code}
> 2017-05-17 15:15:35,027 [26e334aa-1afa-753f-3afe-862f76b80c18:frag:4:2] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes 
> ran out of memory while executing the query. (Unable to allocate buffer of 
> size 2097152 due to memory limit. Current allocation: 29229064)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 2097152 due to memory limit. Current 
> allocation: 29229064
> [Error Id: 619e2e34-704c-4964-a354-1348fb33ce8a ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
> allocate buffer of size 2097152 due to memory limit. Current allocation: 
> 29229064
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:220) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:195) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.BigIntVector.reAlloc(BigIntVector.java:212) 
> ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.BigIntVector.copyFromSafe(BigIntVector.java:324) 
> ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.NullableBigIntVector.copyFromSafe(NullableBigIntVector.java:367)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.NullableBigIntVector$TransferImpl.copyValueSafe(NullableBigIntVector.java:328)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.copyValueSafe(RepeatedMapVector.java:360)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe(MapVector.java:220)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe(MapVector.java:82)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen49.doCopy(PriorityQueueCopierTemplate.java:34)
>  ~[na:na]
> at 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen49.next(PriorityQueueCopierTemplate.java:76)
>  ~[na:na]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.CopierHolder$BatchMerger.next(CopierHolder.java:234)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> 

[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162417#comment-16162417
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r138240616
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
@@ -297,10 +302,7 @@ public void outputRecordValues(@Named("htRowIdx") int htRowIdx, @Named("outRowId
   }
 
   @Override
-  public void setup(HashAggregate hashAggrConfig, HashTableConfig htConfig, FragmentContext context,
-                    OperatorStats stats, OperatorContext oContext, RecordBatch incoming, HashAggBatch outgoing,
-                    LogicalExpression[] valueExprs, List<TypedFieldId> valueFieldIds, TypedFieldId[] groupByOutFieldIds,
-                    VectorContainer outContainer) throws SchemaChangeException, IOException {
+  public void setup(HashAggregate hashAggrConfig, HashTableConfig htConfig, FragmentContext context, OperatorStats stats, OperatorContext oContext, RecordBatch incoming, HashAggBatch outgoing, LogicalExpression[] valueExprs, List<TypedFieldId> valueFieldIds, TypedFieldId[] groupByOutFieldIds, VectorContainer outContainer, int extraRowBytes) throws SchemaChangeException, IOException {
--- End diff --

Removed one argument ("stats"); it can be taken from the "oContext".
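
For readers skimming the thread, here is a minimal, hedged sketch of the pattern behind that remark: the setup call no longer needs a separate stats argument because the context object it already receives owns the stats. The class names below are simplified stand-ins, not Drill's real types or the actual patch.

{code}
// Simplified illustration only (stand-in classes, not Drill's OperatorContext /
// OperatorStats): pass just the context and derive the stats from it.
class StatsSketch { }

class ContextSketch {
  private final StatsSketch stats = new StatsSketch();
  StatsSketch getStats() { return stats; }
}

class HashAggSetupSketch {
  private StatsSketch stats;

  // Before: setup(ContextSketch oContext, StatsSketch stats, ...)
  // After: the separate 'stats' argument is gone; it comes from the context.
  void setup(ContextSketch oContext) {
    this.stats = oContext.getStats();
  }
}
{code}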


> hash agg spill to disk, second phase OOM
> 
>
> Key: DRILL-5694
> URL: https://issues.apache.org/jira/browse/DRILL-5694
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Chun Chang
>Assignee: Boaz Ben-Zvi
>
> | 1.11.0-SNAPSHOT  | d622f76ee6336d97c9189fc589befa7b0f4189d6  | DRILL-5165: 
> For limit all case, no need to push down limit to scan  | 21.07.2017 @ 
> 10:36:29 PDT
> Second phase agg ran out of memory. It is not supposed to. The test data is 
> currently only accessible locally.
> /root/drill-test-framework/framework/resources/Advanced/hash-agg/spill/hagg15.q
> Query:
> select row_count, sum(row_count), avg(double_field), max(double_rand), 
> count(float_rand) from parquet_500m_v1 group by row_count order by row_count 
> limit 30
> Failed with exception
> java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory 
> while executing the query.
> HT was: 534773760 OOM at Second Phase. Partitions: 32. Estimated batch size: 
> 4849664. Planned batches: 0. Rows spilled so far: 6459928 Memory limit: 
> 536870912 so far allocated: 534773760.
> Fragment 1:6
> [Error Id: a193babd-f783-43da-a476-bb8dd4382420 on 10.10.30.168:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) HT was: 534773760 
> OOM at Second Phase. Partitions: 32. Estimated batch size: 4849664. Planned 
> batches: 0. Rows spilled so far: 6459928 Memory limit: 536870912 so far 
> allocated: 534773760.
> 
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.checkGroupAndAggrValues():1175
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.doWork():539
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():168
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext():191
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> 

[jira] [Closed] (DRILL-5522) OOM during the merge and spill process of the managed external sort

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5522.
-

This has been verified.

> OOM during the merge and spill process of the managed external sort
> ---
>
> Key: DRILL-5522
> URL: https://issues.apache.org/jira/browse/DRILL-5522
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Attachments: 26e334aa-1afa-753f-3afe-862f76b80c18.sys.drill, 
> drillbit.log, drillbit.out, drill-env.sh
>
>
> git.commit.id.abbrev=1e0a14c
> The below query fails with an OOM
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1552428800;
> create table dfs.drillTestDir.xsort_ctas3_multiple partition by (type, aCol) 
> as select type, rptds, rms, s3.rms.a aCol, uid from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a
> ) s3;
> {code}
> Stack trace
> {code}
> 2017-05-17 15:15:35,027 [26e334aa-1afa-753f-3afe-862f76b80c18:frag:4:2] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes 
> ran out of memory while executing the query. (Unable to allocate buffer of 
> size 2097152 due to memory limit. Current allocation: 29229064)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 2097152 due to memory limit. Current 
> allocation: 29229064
> [Error Id: 619e2e34-704c-4964-a354-1348fb33ce8a ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
> allocate buffer of size 2097152 due to memory limit. Current allocation: 
> 29229064
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:220) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:195) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.BigIntVector.reAlloc(BigIntVector.java:212) 
> ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.BigIntVector.copyFromSafe(BigIntVector.java:324) 
> ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.NullableBigIntVector.copyFromSafe(NullableBigIntVector.java:367)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.NullableBigIntVector$TransferImpl.copyValueSafe(NullableBigIntVector.java:328)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.copyValueSafe(RepeatedMapVector.java:360)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe(MapVector.java:220)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe(MapVector.java:82)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen49.doCopy(PriorityQueueCopierTemplate.java:34)
>  ~[na:na]
> at 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen49.next(PriorityQueueCopierTemplate.java:76)
>  ~[na:na]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.CopierHolder$BatchMerger.next(CopierHolder.java:234)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeSpilledRuns(ExternalSortBatch.java:1214)
>  

[jira] [Resolved] (DRILL-5443) Managed External Sort fails with OOM while spilling to disk

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou resolved DRILL-5443.
---
Resolution: Fixed

This has been resolved.

> Managed External Sort fails with OOM while spilling to disk
> ---
>
> Key: DRILL-5443
> URL: https://issues.apache.org/jira/browse/DRILL-5443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265a014b-8cae-30b5-adab-ff030b6c7086.sys.drill, 
> 27016969-ef53-40dc-b582-eea25371fa1c.sys.drill, drill5443.drillbit.log, 
> drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> The below query fails with an OOM
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 52428800;
> select s1.type type, flatten(s1.rms.rptd) rptds from (select d.type type, 
> d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid) s1 
> order by s1.rms.mapid;
> {code}
> Exception from the logs
> {code}
> 2017-04-24 17:22:59,439 [27016969-ef53-40dc-b582-eea25371fa1c:frag:0:0] INFO  
> o.a.d.e.p.i.x.m.ExternalSortBatch - User Error Occurred: External Sort 
> encountered an error while spilling to disk (Unable to allocate buffer of 
> size 524288 (rounded from 307197) due to memory limit. Current allocation: 
> 25886728)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: External 
> Sort encountered an error while spilling to disk
> [Error Id: a64e3790-3a34-42c8-b4ea-4cb1df780e63 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.doMergeAndSpill(ExternalSortBatch.java:1445)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:1376)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeRuns(ExternalSortBatch.java:1372)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.consolidateBatches(ExternalSortBatch.java:1299)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeSpilledRuns(ExternalSortBatch.java:1195)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:689)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  

[jira] [Closed] (DRILL-5443) Managed External Sort fails with OOM while spilling to disk

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5443.
-

This has been verified.

> Managed External Sort fails with OOM while spilling to disk
> ---
>
> Key: DRILL-5443
> URL: https://issues.apache.org/jira/browse/DRILL-5443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265a014b-8cae-30b5-adab-ff030b6c7086.sys.drill, 
> 27016969-ef53-40dc-b582-eea25371fa1c.sys.drill, drill5443.drillbit.log, 
> drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> The below query fails with an OOM
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 52428800;
> select s1.type type, flatten(s1.rms.rptd) rptds from (select d.type type, 
> d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid) s1 
> order by s1.rms.mapid;
> {code}
> Exception from the logs
> {code}
> 2017-04-24 17:22:59,439 [27016969-ef53-40dc-b582-eea25371fa1c:frag:0:0] INFO  
> o.a.d.e.p.i.x.m.ExternalSortBatch - User Error Occurred: External Sort 
> encountered an error while spilling to disk (Unable to allocate buffer of 
> size 524288 (rounded from 307197) due to memory limit. Current allocation: 
> 25886728)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: External 
> Sort encountered an error while spilling to disk
> [Error Id: a64e3790-3a34-42c8-b4ea-4cb1df780e63 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.doMergeAndSpill(ExternalSortBatch.java:1445)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:1376)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeRuns(ExternalSortBatch.java:1372)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.consolidateBatches(ExternalSortBatch.java:1299)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.mergeSpilledRuns(ExternalSortBatch.java:1195)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:689)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 

[jira] [Closed] (DRILL-5253) External sort fails with OOM error (Fails to allocate sv2)

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5253.
-

This has been verified.

> External sort fails with OOM error (Fails to allocate sv2)
> --
>
> Key: DRILL-5253
> URL: https://issues.apache.org/jira/browse/DRILL-5253
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 2762f36d-a2e7-5582-922d-3c4626be18c0.sys.drill
>
>
> git.commit.id.abbrev=2af709f
> The data set used in the below query has the same value for every column in 
> every row. The query fails with an OOM as it exceeds the allocated memory.
> {code}
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 104857600;
>  select count(*) from (select * from identical order by col1, col2, col3, 
> col4, col5, col6, col7, col8, col9, col10);
> Error: RESOURCE ERROR: One or more nodes ran out of memory while executing 
> the query.
> org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate sv2 
> buffer after repeated attempts
> Fragment 2:0
> [Error Id: aed43fa1-fd8b-4440-9426-0f35d055aabb on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Exception from the logs
> {code}
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate sv2 
> buffer after repeated attempts
> [Error Id: aed43fa1-fd8b-4440-9426-0f35d055aabb ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:242)
>  [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: 
> org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate sv2 
> buffer after repeated attempts
> at 
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:371)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92)
>  ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
>  

[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162368#comment-16162368
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r138236250
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java ---
@@ -92,18 +92,20 @@
 
   // Hash Aggregate Options
 
-  String HASHAGG_NUM_PARTITIONS = "drill.exec.hashagg.num_partitions";
   String HASHAGG_NUM_PARTITIONS_KEY = "exec.hashagg.num_partitions";
   LongValidator HASHAGG_NUM_PARTITIONS_VALIDATOR = new RangeLongValidator(HASHAGG_NUM_PARTITIONS_KEY, 1, 128); // 1 means - no spilling
-  String HASHAGG_MAX_MEMORY = "drill.exec.hashagg.mem_limit";
   String HASHAGG_MAX_MEMORY_KEY = "exec.hashagg.mem_limit";
   LongValidator HASHAGG_MAX_MEMORY_VALIDATOR = new RangeLongValidator(HASHAGG_MAX_MEMORY_KEY, 0, Integer.MAX_VALUE);
   // min batches is used for tuning (each partition needs so many batches when planning the number of partitions,
   // or reserve this number when calculating whether the remaining available memory is too small and requires a spill.)
   // Low value may OOM (e.g., when incoming rows become wider), higher values use fewer partitions but are safer
-  String HASHAGG_MIN_BATCHES_PER_PARTITION = "drill.exec.hashagg.min_batches_per_partition";
-  String HASHAGG_MIN_BATCHES_PER_PARTITION_KEY = "drill.exec.hashagg.min_batches_per_partition";
-  LongValidator HASHAGG_MIN_BATCHES_PER_PARTITION_VALIDATOR = new RangeLongValidator(HASHAGG_MIN_BATCHES_PER_PARTITION_KEY, 2, 5);
+  String HASHAGG_MIN_BATCHES_PER_PARTITION_KEY = "exec.hashagg.min_batches_per_partition";
+  LongValidator HASHAGG_MIN_BATCHES_PER_PARTITION_VALIDATOR = new RangeLongValidator(HASHAGG_MIN_BATCHES_PER_PARTITION_KEY, 1, 5);
+  // Can be turned off mainly for testing. Memory prediction is used to decide on when to spill to disk; with this option off,
--- End diff --

Done
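
As a side note on the min_batches_per_partition tuning comment in the diff above: the option trades the number of hash-agg partitions against the memory reserved per partition. The sketch below is back-of-the-envelope arithmetic under assumed rounding rules, not Drill's actual partition-planning code; the constants simply echo the memory limit and estimated batch size reported in the quoted issue further down.

{code}
// Illustrative only: how many partitions fit if each partition must reserve
// minBatchesPerPartition batches of estBatchSize bytes, rounding the result
// down to a power of two and capping it at the option's maximum.
public class PartitionSizingSketch {
  static int planPartitions(long memLimit, long estBatchSize,
                            int minBatchesPerPartition, int maxPartitions) {
    long perPartition = estBatchSize * minBatchesPerPartition;
    int fit = (int) Math.max(1L, memLimit / perPartition);
    return Integer.highestOneBit(Math.min(fit, maxPartitions)); // power of 2
  }

  public static void main(String[] args) {
    long memLimit = 536_870_912L;    // 512 MiB, as in the quoted error message
    long estBatchSize = 4_849_664L;  // estimated batch size from the same message
    // Reserving more batches per partition yields fewer, larger partitions.
    System.out.println(planPartitions(memLimit, estBatchSize, 2, 128)); // 32
    System.out.println(planPartitions(memLimit, estBatchSize, 1, 128)); // 64
  }
}
{code}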


> hash agg spill to disk, second phase OOM
> 
>
> Key: DRILL-5694
> URL: https://issues.apache.org/jira/browse/DRILL-5694
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Chun Chang
>Assignee: Boaz Ben-Zvi
>
> | 1.11.0-SNAPSHOT  | d622f76ee6336d97c9189fc589befa7b0f4189d6  | DRILL-5165: 
> For limit all case, no need to push down limit to scan  | 21.07.2017 @ 
> 10:36:29 PDT
> Second phase agg ran out of memory. It is not supposed to. The test data is 
> currently only accessible locally.
> /root/drill-test-framework/framework/resources/Advanced/hash-agg/spill/hagg15.q
> Query:
> select row_count, sum(row_count), avg(double_field), max(double_rand), 
> count(float_rand) from parquet_500m_v1 group by row_count order by row_count 
> limit 30
> Failed with exception
> java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory 
> while executing the query.
> HT was: 534773760 OOM at Second Phase. Partitions: 32. Estimated batch size: 
> 4849664. Planned batches: 0. Rows spilled so far: 6459928 Memory limit: 
> 536870912 so far allocated: 534773760.
> Fragment 1:6
> [Error Id: a193babd-f783-43da-a476-bb8dd4382420 on 10.10.30.168:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) HT was: 534773760 
> OOM at Second Phase. Partitions: 32. Estimated batch size: 4849664. Planned 
> batches: 0. Rows spilled so far: 6459928 Memory limit: 536870912 so far 
> allocated: 534773760.
> 
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.checkGroupAndAggrValues():1175
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.doWork():539
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():168
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext():191
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> 

[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162370#comment-16162370
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r138236706
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java ---
@@ -109,14 +107,21 @@
 
   private boolean isTwoPhase = false; // 1 phase or 2 phase aggr?
   private boolean is2ndPhase = false;
-  private boolean canSpill = true; // make it false in case can not spill
+  private boolean is1stPhase = false;
+  private boolean canSpill = true; // make it false in case can not spill/return-early
   private ChainedHashTable baseHashTable;
   private boolean earlyOutput = false; // when 1st phase returns a partition due to no memory
   private int earlyPartition = 0; // which partition to return early
-
-  private long memoryLimit; // max memory to be used by this oerator
-  private long estMaxBatchSize = 0; // used for adjusting #partitions
-  private long estRowWidth = 0;
+  private boolean retrySameIndex = false; // in case put failed during 1st phase - need to output early, then retry
--- End diff --

This is more for code readability -- "by default, this flag was chosen to 
be false".   


> hash agg spill to disk, second phase OOM
> 
>
> Key: DRILL-5694
> URL: https://issues.apache.org/jira/browse/DRILL-5694
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Chun Chang
>Assignee: Boaz Ben-Zvi
>
> | 1.11.0-SNAPSHOT  | d622f76ee6336d97c9189fc589befa7b0f4189d6  | DRILL-5165: 
> For limit all case, no need to push down limit to scan  | 21.07.2017 @ 
> 10:36:29 PDT
> Second phase agg ran out of memory. It is not supposed to. The test data is 
> currently only accessible locally.
> /root/drill-test-framework/framework/resources/Advanced/hash-agg/spill/hagg15.q
> Query:
> select row_count, sum(row_count), avg(double_field), max(double_rand), 
> count(float_rand) from parquet_500m_v1 group by row_count order by row_count 
> limit 30
> Failed with exception
> java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory 
> while executing the query.
> HT was: 534773760 OOM at Second Phase. Partitions: 32. Estimated batch size: 
> 4849664. Planned batches: 0. Rows spilled so far: 6459928 Memory limit: 
> 536870912 so far allocated: 534773760.
> Fragment 1:6
> [Error Id: a193babd-f783-43da-a476-bb8dd4382420 on 10.10.30.168:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) HT was: 534773760 
> OOM at Second Phase. Partitions: 32. Estimated batch size: 4849664. Planned 
> batches: 0. Rows spilled so far: 6459928 Memory limit: 536870912 so far 
> allocated: 534773760.
> 
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.checkGroupAndAggrValues():1175
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.doWork():539
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():168
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext():191
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> 

[jira] [Commented] (DRILL-5694) hash agg spill to disk, second phase OOM

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162369#comment-16162369
 ] 

ASF GitHub Bot commented on DRILL-5694:
---

Github user Ben-Zvi commented on a diff in the pull request:

https://github.com/apache/drill/pull/938#discussion_r138236560
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java ---
@@ -293,7 +299,7 @@ private HashAggregator createAggregatorInternal() throws SchemaChangeException,
 aggrExprs,
 cgInner.getWorkspaceTypes(),
 groupByOutFieldIds,
-this.container);
+this.container, extraNonNullColumns * 8 /* sizeof(BigInt) */);
--- End diff --

Not sure; it seemed to work OK in some (limited) testing.
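
For context on the "extraNonNullColumns * 8" term being discussed: the factor of 8 is the fixed byte width of a BigInt value, so the estimated row width grows by 8 bytes for each extra non-null aggregate column. The snippet below is a trivial, hedged restatement of that arithmetic with hypothetical names, not Drill code.

{code}
// Illustrative arithmetic only (hypothetical names): budget one fixed 8-byte
// slot per extra non-null aggregate value, as in "extraNonNullColumns * 8".
class ExtraRowBytesSketch {
  static final int BIGINT_WIDTH_BYTES = 8;

  static int extraRowBytes(int extraNonNullColumns) {
    return extraNonNullColumns * BIGINT_WIDTH_BYTES;
  }

  public static void main(String[] args) {
    System.out.println(extraRowBytes(4)); // four extra columns -> 32 extra bytes per row
  }
}
{code}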


> hash agg spill to disk, second phase OOM
> 
>
> Key: DRILL-5694
> URL: https://issues.apache.org/jira/browse/DRILL-5694
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Chun Chang
>Assignee: Boaz Ben-Zvi
>
> | 1.11.0-SNAPSHOT  | d622f76ee6336d97c9189fc589befa7b0f4189d6  | DRILL-5165: 
> For limit all case, no need to push down limit to scan  | 21.07.2017 @ 
> 10:36:29 PDT
> Second phase agg ran out of memory. It is not supposed to. The test data is 
> currently only accessible locally.
> /root/drill-test-framework/framework/resources/Advanced/hash-agg/spill/hagg15.q
> Query:
> select row_count, sum(row_count), avg(double_field), max(double_rand), 
> count(float_rand) from parquet_500m_v1 group by row_count order by row_count 
> limit 30
> Failed with exception
> java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory 
> while executing the query.
> HT was: 534773760 OOM at Second Phase. Partitions: 32. Estimated batch size: 
> 4849664. Planned batches: 0. Rows spilled so far: 6459928 Memory limit: 
> 536870912 so far allocated: 534773760.
> Fragment 1:6
> [Error Id: a193babd-f783-43da-a476-bb8dd4382420 on 10.10.30.168:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) HT was: 534773760 
> OOM at Second Phase. Partitions: 32. Estimated batch size: 4849664. Planned 
> batches: 0. Rows spilled so far: 6459928 Memory limit: 536870912 so far 
> allocated: 534773760.
> 
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.checkGroupAndAggrValues():1175
> org.apache.drill.exec.test.generated.HashAggregatorGen1823.doWork():539
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():168
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext():191
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745
>   Caused By (org.apache.drill.exec.exception.OutOfMemoryException) Unable to 
> allocate buffer of size 4194304 due to memory limit. Current allocation: 
> 534773760
> org.apache.drill.exec.memory.BaseAllocator.buffer():238
> org.apache.drill.exec.memory.BaseAllocator.buffer():213
> org.apache.drill.exec.vector.IntVector.allocateBytes():231
> 

[jira] [Closed] (DRILL-5519) Sort fails to spill and results in an OOM

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5519.
-

This has been verified.

> Sort fails to spill and results in an OOM
> -
>
> Key: DRILL-5519
> URL: https://issues.apache.org/jira/browse/DRILL-5519
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26e49afc-cf45-637b-acc1-a70fee7fe7e2.sys.drill, 
> drillbit.log, drillbit.out, drill-env.sh
>
>
> Setup :
> {code}
> git.commit.id.abbrev=1e0a14c
> DRILL_MAX_DIRECT_MEMORY="32G"
> DRILL_MAX_HEAP="4G"
> No of nodes in the drill cluster : 1
> {code}
> The below query fails with an OOM in the "in-memory sort" code, which means 
> the logic which decides when to spill is flawed.
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +-------+---------------------------------------------------+
> |  ok   |                      summary                      |
> +-------+---------------------------------------------------+
> | true  | exec.sort.disable_managed updated.                |
> +-------+---------------------------------------------------+
> 1 row selected (1.022 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 334288000;
> +-------+---------------------------------------------------+
> |  ok   |                      summary                      |
> +-------+---------------------------------------------------+
> | true  | planner.memory.max_query_memory_per_node updated. |
> +-------+---------------------------------------------------+
> 1 row selected (0.369 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from 
> (select flatten(flatten(lst_lst)) num from 
> dfs.`/drill/testdata/resource-manager/nested-large.json`) d order by d.num) 
> d1 where d1.num < -1;
> Error: RESOURCE ERROR: One or more nodes ran out of memory while executing 
> the query.
> Unable to allocate buffer of size 4194304 (rounded from 320) due to 
> memory limit. Current allocation: 16015936
> Fragment 2:2
> [Error Id: 4d9cc59a-b5d1-4ca9-9b26-69d9438f0bee on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Below is the exception from the logs
> {code}
> 2017-05-16 13:46:33,233 [26e49afc-cf45-637b-acc1-a70fee7fe7e2:frag:2:2] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes 
> ran out of memory while executing the query. (Unable to allocate buffer of 
> size 4194304 (rounded from 320) due to memory limit. Current allocation: 
> 16015936)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 (rounded from 320) due to 
> memory limit. Current allocation: 16015936
> [Error Id: 4d9cc59a-b5d1-4ca9-9b26-69d9438f0bee ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
> allocate buffer of size 4194304 (rounded from 320) due to memory limit. 
> Current allocation: 16015936
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:220) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:195) 
> ~[drill-memory-base-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.test.generated.MSorterGen44.setup(MSortTemplate.java:91)
>  ~[na:na]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.MergeSort.merge(MergeSort.java:110)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.sortInMemory(ExternalSortBatch.java:1159)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:687)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> 

[jira] [Closed] (DRILL-5465) Managed external sort results in an OOM

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5465.
-

This has been verified.

> Managed external sort results in an OOM
> ---
>
> Key: DRILL-5465
> URL: https://issues.apache.org/jira/browse/DRILL-5465
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26f7368e-21a1-6513-74ea-a178ae1e50f8.sys.drill, 
> createViewsParquet.sql, drillbit.log
>
>
> git.commit.id.abbrev=1e0a14c
> The below query fails with an OOM on top of Tpcds SF1 parquet data. Since the 
> sort already spilled once, I assume there is sufficient memory to handle the 
> spill/merge batches. The view definition file is attached and the data can be 
> downloaded from [1]
> {code}
> use dfs.tpcds_sf1_parquet_views;
> alter session set `planner.enable_decimal_data_type` = true;
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 200435456;
> alter session set `planner.enable_hashjoin` = false;
> SELECT dt.d_year,
>item.i_brand_id  brand_id,
>item.i_brand brand,
>Sum(ss_ext_discount_amt) sum_agg
> FROM   date_dim dt,
>store_sales,
>item
> WHERE  dt.d_date_sk = store_sales.ss_sold_date_sk
>AND store_sales.ss_item_sk = item.i_item_sk
>AND item.i_manufact_id = 427
>AND dt.d_moy = 11
> GROUP  BY dt.d_year,
>   item.i_brand,
>   item.i_brand_id
> ORDER  BY dt.d_year,
>   sum_agg DESC,
>   brand_id;
> {code}
> Exception from the logs
> {code}
> [Error Id: 676ff6ad-829d-4920-9d4f-5132601d27b4 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:617)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:425)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.RecordIterator.nextBatch(RecordIterator.java:99) 
> [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.RecordIterator.next(RecordIterator.java:185) 
> [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.RecordIterator.prepare(RecordIterator.java:169) 
> [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.join.JoinStatus.prepare(JoinStatus.java:87)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.join.MergeJoinBatch.innerNext(MergeJoinBatch.java:160)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  

[jira] [Closed] (DRILL-5447) Managed External Sort : Unable to allocate sv2 vector

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5447.
-

This has been verified.

> Managed External Sort : Unable to allocate sv2 vector
> -
>
> Key: DRILL-5447
> URL: https://issues.apache.org/jira/browse/DRILL-5447
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26550427-6adf-a52e-2ea8-dc52d8d8433f.sys.drill, 
> 26617a7e-b953-7ac3-556d-43fd88e51b19.sys.drill, 
> 26fee988-ed18-a86a-7164-3e75118c0ffc.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> Dataset :
> {code}
> Every record contains a repeated type with 2000 elements. 
> The repeated type contains varchars of length 250 for the first 2000 records 
> and single-character strings for the next 2000 records.
> The above pattern is repeated a few times.
> {code}
> The below query fails
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> select count(*) from (select * from (select id, flatten(str_list) str from 
> dfs.`/drill/testdata/resource-manager/flatten-large-small.json`) d order by 
> d.str) d1 where d1.id=0;
> Error: RESOURCE ERROR: Unable to allocate sv2 buffer
> Fragment 0:0
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 on qa-node190.qa.lab:31010] 
> (state=,code=0)
> {code}
> Exception from the logs
> {code}
> [Error Id: 9e45c293-ab26-489d-a90e-25da96004f15 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.newSV2(ExternalSortBatch.java:1463)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.makeSelectionVector(ExternalSortBatch.java:799)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.processBatch(ExternalSortBatch.java:856)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch(ExternalSortBatch.java:618)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:660)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:559)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  

[jira] [Closed] (DRILL-5445) Assertion Error in Managed External Sort when dealing with repeated maps

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5445.
-

This has been verified.

> Assertion Error in Managed External Sort when dealing with repeated maps
> 
>
> Key: DRILL-5445
> URL: https://issues.apache.org/jira/browse/DRILL-5445
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 27004a3c-c53d-52d1-c7ed-4beb563447f9.sys.drill, 
> drillbit.log
>
>
> git.commit.id.abbrev=3e8b01d
> The below query fails with an Assertion Error (I am running with assertions 
> enabled)
> {code}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 152428800;
> select count(*) from (
> select * from (
> select event_info.uid, transaction_info.trans_id, event_info.event.evnt_id
> from (
>  select userinfo.transaction.trans_id trans_id, 
> max(userinfo.event.event_time) max_event_time
>  from (
>  select uid, flatten(events) event, flatten(transactions) transaction 
> from dfs.`/drill/testdata/resource-manager/nested-large.json`
>  ) userinfo
>  where userinfo.transaction.trans_time >= userinfo.event.event_time
>  group by userinfo.transaction.trans_id
> ) transaction_info
> inner join
> (
>  select uid, flatten(events) event
>  from dfs.`/drill/testdata/resource-manager/nested-large.json`
> ) event_info
> on transaction_info.max_event_time = event_info.event.event_time) d order by 
> features[0].type) d1 where d1.uid < -1;
> {code}
> Below is the error from the logs
> {code}
> [Error Id: 26983344-dee3-4a33-8508-ad125f01fee6 on qa-node190.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_111]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
> Caused by: java.lang.RuntimeException: java.lang.AssertionError
> at 
> org.apache.drill.common.DeferredException.addThrowable(DeferredException.java:101)
>  ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.fail(FragmentExecutor.java:409)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> ... 4 common frames omitted
> Caused by: java.lang.AssertionError: null
> at 
> org.apache.drill.exec.vector.complex.RepeatedMapVector.load(RepeatedMapVector.java:444)
>  ~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.cache.VectorAccessibleSerializable.readFromStream(VectorAccessibleSerializable.java:118)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.BatchGroup$SpilledRun.getBatch(BatchGroup.java:222)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.BatchGroup$SpilledRun.getNextIndex(BatchGroup.java:196)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen23.setup(PriorityQueueCopierTemplate.java:60)
>  ~[na:na]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.CopierHolder.createCopier(CopierHolder.java:116)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> org.apache.drill.exec.physical.impl.xsort.managed.CopierHolder.access$200(CopierHolder.java:45)
>  ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
> at 
> 

[jira] [Closed] (DRILL-5442) Managed Sort: IndexOutOfBounds with a join over an inlist

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5442.
-

I have verified this has been fixed.

> Managed Sort: IndexOutOfBounds with a join over an inlist
> -
>
> Key: DRILL-5442
> URL: https://issues.apache.org/jira/browse/DRILL-5442
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Boaz Ben-Zvi
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
>
> The following query fails with IOOB when a managed sort is used, but passes 
> with the old default sort:
> =
> 0: jdbc:drill:zk=local> alter session set `exec.sort.disable_managed` = false;
> +---+-+
> |  ok   |   summary   |
> +---+-+
> | true  | exec.sort.disable_managed updated.  |
> +---+-+
> 1 row selected (0.16 seconds)
> 0: jdbc:drill:zk=local> select * from dfs.`/data/json/s1/date_dim` where 
> d_year in(1990, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 
> 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919) limit 3;
> Error: SYSTEM ERROR: IndexOutOfBoundsException: index: 0, length: 1 
> (expected: range(0, 0))
> Fragment 0:0
> [Error Id: 370fd706-c365-421f-b57d-d6ab7fde82df on 10.250.56.251:31010] 
> (state=,code=0)
>  
> 
> (the above query was extracted from 
> /root/drillAutomation/framework-master/framework/resources/Functional/tpcds/variants/hive/q4_1.sql
>  )
> Note that the inlist must have at least 20 items; at that size the plan 
> becomes a join over a stream-aggregate over a sort on the inlist's values. 
> When the IOOB happens, the stack no longer shows the sort; it is probably 
> handling a NONE returned by the last next() on the sort 
> (StreamingAggTemplate.doWork():182).
> The "date_dim" table can probably be built from any data. The one above was 
> taken from:
> [root@atsqa6c85 ~]# hadoop fs -ls /drill/testdata/tpcds/json/s1/date_dim
> Found 1 items
> -rwxr-xr-x   3 root root   50713534 2014-10-14 22:39 
> /drill/testdata/tpcds/json/s1/date_dim/0_0_0.json
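
The shape of the IOOB message above -- "index: 0, length: 1 (expected: range(0, 0))" -- 
matches Netty's buffer bounds check when a single byte is read at index 0 from a 
zero-capacity buffer, which would be consistent with an operator touching an empty 
value vector after the sort returns NONE. A minimal sketch (plain Netty, not Drill 
code) that produces the same kind of message:

{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class EmptyBufferRead {
  public static void main(String[] args) {
    // A zero-capacity buffer stands in for the data buffer of an empty value vector.
    ByteBuf empty = Unpooled.buffer(0);
    try {
      empty.getByte(0); // read 1 byte at index 0 from a capacity-0 buffer
    } catch (IndexOutOfBoundsException e) {
      // Expected output (with Netty's default bounds checking enabled):
      // index: 0, length: 1 (expected: range(0, 0))
      System.out.println(e.getMessage());
    } finally {
      empty.release();
    }
  }
}
{code}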



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector

2017-09-11 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162303#comment-16162303
 ] 

Paul Rogers commented on DRILL-5670:


From the log:

{code}
Output batch size: net = 16,739,148 bytes, gross = 25,108,722 bytes, records = 
348
{code}

The above shows that the sort will return batches of 348 rows, with an expected 
memory footprint of around 25 MB per batch.
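
As a quick sanity check on those figures (illustrative arithmetic only, not Drill 
code; the constants are simply copied from the log line above):

{code}
public class BatchSizeArithmetic {
  public static void main(String[] args) {
    long grossBytes = 25_108_722L; // gross batch size from the log
    long netBytes   = 16_739_148L; // net data bytes from the log
    int  records    = 348;         // rows per output batch from the log

    // Per-row footprint: ~72 KB gross, ~48 KB net. These are very wide rows,
    // which is why only 348 of them fit in one ~25 MB output batch.
    System.out.println("gross bytes per row = " + grossBytes / records); // 72151
    System.out.println("net bytes per row   = " + netBytes / records);   // 48101
    System.out.printf("batch footprint     = %.1f MB%n", grossBytes / 1_000_000.0); // 25.1 MB
  }
}
{code}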

The sort then does its work:

{code}
Completed load phase: read 978 batches, spilled 194 times, total input bytes: 
49162399102
...
Starting merge phase. Runs = 46, Alloc. memory = 0
{code}

The above shows that the sort completed the sorting part of the work; it had 
moved on to returning batches downstream. We can confirm this because the next 
entry in the log is for schema setup in the selection vector remover -- 
something that happens only once the sort starts delivering results.

The log contains many entries of the form:

{code}
RemovingRecordBatch - doWork(): 348 records copied out of 348, remaining: 348
{code}

The above shows that the sort is, indeed, returning batches of 348 records, as 
it promised earlier. (Not sure why the SVR claims that 348 are "remaining.") 
The SVR emits many of these entries, suggesting it has processed many batches.

Later, this fragment is killed, likely because of the oversize allocation in 
the receiving fragment:

{code}
26498995-bbad-83bc-618f-914c37a84e1f:1:0: State change requested RUNNING --> 
FAILED
{code}

All of this suggests that the sort worked fine, but that the query died due to 
some other problem (likely in the exchange, as discussed in a prior note.)


> Varchar vector throws an assertion error when allocating a new vector
> -
>
> Key: DRILL-5670
> URL: https://issues.apache.org/jira/browse/DRILL-5670
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, 
> 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, 
> 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, 
> 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log, drillbit.log, drillbit.log, drillbit.log.sort, drillbit.out, 
> drill-override.conf
>
>
> I am running this test on a private branch of [paul's 
> repository|https://github.com/paul-rogers/drill]. Below is the commit info
> {code}
> git.commit.id.abbrev=d86e16c
> git.commit.user.email=prog...@maprtech.com
> git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an 
> improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the 
> merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- 
> DRILL-5522\: OOM during the merge and spill process of the managed external 
> sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of 
> external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable 
> vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to 
> initialize the offset vector\n\nAll of the bugs have to do with handling 
> low-memory conditions, and with\ncorrectly estimating the sizes of vectors, 
> even when those vectors come\nfrom the spill file or from an exchange. Hence, 
> the changes for all of\nthe above issues are interrelated.\n
> git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an 
> improvements
> git.commit.user.name=Paul Rogers
> git.build.user.name=Rahul Challapalli
> git.commit.id.describe=0.9.0-1078-gd86e16c
> git.build.user.email=challapallira...@gmail.com
> git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.time=05.07.2017 @ 20\:34\:39 PDT
> git.build.time=12.07.2017 @ 14\:27\:03 PDT
> git.remote.origin.url=g...@github.com\:paul-rogers/drill.git
> {code}
> Below query fails with an Assertion Error
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +---+-+
> |  ok   |   summary   |
> +---+-+
> | true  | exec.sort.disable_managed updated.  |
> +---+-+
> 1 row selected (1.044 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 482344960;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | planner.memory.max_query_memory_per_node updated.  |
> 

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162297#comment-16162297
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on the issue:

https://github.com/apache/drill/pull/923
  
@paul-rogers Finished applying review comments. The PR is ready for review again.


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered but are not visible in the 
> sys.options table. These internal options could be seen through another alias: 
> select * from internal.options. The intention is to put new options we aren't 
> yet comfortable exposing to the end user into this table.
> After the options and their corresponding features are considered stable, they 
> could be changed to appear in the sys.options table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager.
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the option's visibility, required permissions, and the 
> scope in which it can be set.
> * The OptionManager interface has been cleaned up so that a Type no longer 
> needs to be passed in to set and delete options.
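
For illustration, a rough sketch of the validator-plus-metadata idea described in 
the issue (class, enum, and option names here are hypothetical, not Drill's actual 
API):

{code}
import java.util.EnumSet;

public class OptionDefinitionSketch {

  enum Visibility { PUBLIC, INTERNAL }          // INTERNAL options are hidden from sys.options
  enum Scope { BOOT, SYSTEM, SESSION, QUERY }   // scopes in which an option may be set

  interface Validator {                         // validates a proposed option value
    void validate(String value);
  }

  static final class OptionDefinition {
    final String name;
    final Validator validator;
    final Visibility visibility;
    final EnumSet<Scope> accessibleScopes;

    OptionDefinition(String name, Validator validator,
                     Visibility visibility, EnumSet<Scope> accessibleScopes) {
      this.name = name;
      this.validator = validator;
      this.visibility = visibility;
      this.accessibleScopes = accessibleScopes;
    }
  }

  public static void main(String[] args) {
    // A hypothetical internal option that may only be set at SYSTEM scope.
    OptionDefinition def = new OptionDefinition(
        "exec.example_internal_knob",
        value -> Long.parseLong(value),         // minimal numeric check; throws on bad input
        Visibility.INTERNAL,
        EnumSet.of(Scope.SYSTEM));
    def.validator.validate("1024");
    System.out.println(def.name + " visible=" + def.visibility
        + " scopes=" + def.accessibleScopes);
  }
}
{code}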



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector

2017-09-11 Thread Robert Hou (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162289#comment-16162289
 ] 

Robert Hou commented on DRILL-5670:
---

I have attached drillbit.log.sort.  Can you confirm that sort has completed?

> Varchar vector throws an assertion error when allocating a new vector
> -
>
> Key: DRILL-5670
> URL: https://issues.apache.org/jira/browse/DRILL-5670
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, 
> 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, 
> 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, 
> 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log, drillbit.log, drillbit.log, drillbit.log.sort, drillbit.out, 
> drill-override.conf
>
>
> I am running this test on a private branch of [paul's 
> repository|https://github.com/paul-rogers/drill]. Below is the commit info
> {code}
> git.commit.id.abbrev=d86e16c
> git.commit.user.email=prog...@maprtech.com
> git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an 
> improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the 
> merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- 
> DRILL-5522\: OOM during the merge and spill process of the managed external 
> sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of 
> external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable 
> vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to 
> initialize the offset vector\n\nAll of the bugs have to do with handling 
> low-memory conditions, and with\ncorrectly estimating the sizes of vectors, 
> even when those vectors come\nfrom the spill file or from an exchange. Hence, 
> the changes for all of\nthe above issues are interrelated.\n
> git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an 
> improvements
> git.commit.user.name=Paul Rogers
> git.build.user.name=Rahul Challapalli
> git.commit.id.describe=0.9.0-1078-gd86e16c
> git.build.user.email=challapallira...@gmail.com
> git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.time=05.07.2017 @ 20\:34\:39 PDT
> git.build.time=12.07.2017 @ 14\:27\:03 PDT
> git.remote.origin.url=g...@github.com\:paul-rogers/drill.git
> {code}
> Below query fails with an Assertion Error
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +---+-+
> |  ok   |   summary   |
> +---+-+
> | true  | exec.sort.disable_managed updated.  |
> +---+-+
> 1 row selected (1.044 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 482344960;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | planner.memory.max_query_memory_per_node updated.  |
> +---++
> 1 row selected (0.372 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_node` = 1;
> +---+--+
> |  ok   |   summary|
> +---+--+
> | true  | planner.width.max_per_node updated.  |
> +---+--+
> 1 row selected (0.292 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_query` = 1;
> +---+---+
> |  ok   |summary|
> +---+---+
> | true  | planner.width.max_per_query updated.  |
> +---+---+
> 1 row selected (0.25 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from 
> dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by 
> columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50],
>  
> columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520],
>  columns[1410], 
> 

[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou updated DRILL-5670:
--
Attachment: drillbit.log.sort

> Varchar vector throws an assertion error when allocating a new vector
> -
>
> Key: DRILL-5670
> URL: https://issues.apache.org/jira/browse/DRILL-5670
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, 
> 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, 
> 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, 
> 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log, drillbit.log, drillbit.log, drillbit.log.sort, drillbit.out, 
> drill-override.conf
>
>
> I am running this test on a private branch of [paul's 
> repository|https://github.com/paul-rogers/drill]. Below is the commit info
> {code}
> git.commit.id.abbrev=d86e16c
> git.commit.user.email=prog...@maprtech.com
> git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an 
> improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the 
> merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- 
> DRILL-5522\: OOM during the merge and spill process of the managed external 
> sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of 
> external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable 
> vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to 
> initialize the offset vector\n\nAll of the bugs have to do with handling 
> low-memory conditions, and with\ncorrectly estimating the sizes of vectors, 
> even when those vectors come\nfrom the spill file or from an exchange. Hence, 
> the changes for all of\nthe above issues are interrelated.\n
> git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an 
> improvements
> git.commit.user.name=Paul Rogers
> git.build.user.name=Rahul Challapalli
> git.commit.id.describe=0.9.0-1078-gd86e16c
> git.build.user.email=challapallira...@gmail.com
> git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.time=05.07.2017 @ 20\:34\:39 PDT
> git.build.time=12.07.2017 @ 14\:27\:03 PDT
> git.remote.origin.url=g...@github.com\:paul-rogers/drill.git
> {code}
> Below query fails with an Assertion Error
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +---+-+
> |  ok   |   summary   |
> +---+-+
> | true  | exec.sort.disable_managed updated.  |
> +---+-+
> 1 row selected (1.044 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 482344960;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | planner.memory.max_query_memory_per_node updated.  |
> +---++
> 1 row selected (0.372 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_node` = 1;
> +---+--+
> |  ok   |   summary|
> +---+--+
> | true  | planner.width.max_per_node updated.  |
> +---+--+
> 1 row selected (0.292 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_query` = 1;
> +---+---+
> |  ok   |summary|
> +---+---+
> | true  | planner.width.max_per_query updated.  |
> +---+---+
> 1 row selected (0.25 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from 
> dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by 
> columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50],
>  
> columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520],
>  columns[1410], 
> columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350],
>  
> 

[jira] [Updated] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou updated DRILL-5670:
--
Attachment: drillbit.log
26498995-bbad-83bc-618f-914c37a84e1f.sys.drill

> Varchar vector throws an assertion error when allocating a new vector
> -
>
> Key: DRILL-5670
> URL: https://issues.apache.org/jira/browse/DRILL-5670
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Rahul Challapalli
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26498995-bbad-83bc-618f-914c37a84e1f.sys.drill, 
> 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, 
> 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, 
> 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log, drillbit.log, drillbit.log, drillbit.out, drill-override.conf
>
>
> I am running this test on a private branch of [paul's 
> repository|https://github.com/paul-rogers/drill]. Below is the commit info
> {code}
> git.commit.id.abbrev=d86e16c
> git.commit.user.email=prog...@maprtech.com
> git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an 
> improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the 
> merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- 
> DRILL-5522\: OOM during the merge and spill process of the managed external 
> sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of 
> external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable 
> vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to 
> initialize the offset vector\n\nAll of the bugs have to do with handling 
> low-memory conditions, and with\ncorrectly estimating the sizes of vectors, 
> even when those vectors come\nfrom the spill file or from an exchange. Hence, 
> the changes for all of\nthe above issues are interrelated.\n
> git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an 
> improvements
> git.commit.user.name=Paul Rogers
> git.build.user.name=Rahul Challapalli
> git.commit.id.describe=0.9.0-1078-gd86e16c
> git.build.user.email=challapallira...@gmail.com
> git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.time=05.07.2017 @ 20\:34\:39 PDT
> git.build.time=12.07.2017 @ 14\:27\:03 PDT
> git.remote.origin.url=g...@github.com\:paul-rogers/drill.git
> {code}
> Below query fails with an Assertion Error
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +---+-+
> |  ok   |   summary   |
> +---+-+
> | true  | exec.sort.disable_managed updated.  |
> +---+-+
> 1 row selected (1.044 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 482344960;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | planner.memory.max_query_memory_per_node updated.  |
> +---++
> 1 row selected (0.372 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_node` = 1;
> +---+--+
> |  ok   |   summary|
> +---+--+
> | true  | planner.width.max_per_node updated.  |
> +---+--+
> 1 row selected (0.292 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_query` = 1;
> +---+---+
> |  ok   |summary|
> +---+---+
> | true  | planner.width.max_per_query updated.  |
> +---+---+
> 1 row selected (0.25 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from 
> dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by 
> columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50],
>  
> columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[],columns[30],columns[2420],columns[1520],
>  columns[1410], 
> 

[jira] [Commented] (DRILL-5670) Varchar vector throws an assertion error when allocating a new vector

2017-09-11 Thread Robert Hou (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162228#comment-16162228
 ] 

Robert Hou commented on DRILL-5670:
---

I am getting a different error now:
{noformat}
2017-09-11 06:23:17,297 [BitServer-3] DEBUG o.a.drill.exec.work.foreman.Foreman 
- 26498995-bbad-83bc-618f-914c37a84e1f: State change requested RUNNING --> 
FAILED
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
OversizedAllocationException: Unable to expand the buffer. Max allowed buffer 
size is reached.

Fragment 1:0

[Error Id: 2f6ad792-9160-487e-9dbe-0d54ec53d0ae on atsqa6c86.qa.lab:31010]

  (org.apache.drill.exec.exception.OversizedAllocationException) Unable to 
expand the buffer. Max allowed buffer size is reached.
org.apache.drill.exec.vector.VarCharVector.reAlloc():425
org.apache.drill.exec.vector.VarCharVector$Mutator.setSafe():623
org.apache.drill.exec.vector.RepeatedVarCharVector$Mutator.addSafe():374
org.apache.drill.exec.vector.RepeatedVarCharVector$Mutator.addSafe():365
org.apache.drill.exec.vector.RepeatedVarCharVector.copyFromSafe():220

org.apache.drill.exec.test.generated.MergingReceiverGeneratorBaseGen584.doCopy():343

org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch.copyRecordToOutgoingBatch():721

org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch.innerNext():360
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():109
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():109
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():109
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51

org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():109
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():109

org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():151
org.apache.drill.exec.record.AbstractRecordBatch.next():164
org.apache.drill.exec.physical.impl.BaseRootExec.next():105

org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
org.apache.drill.exec.physical.impl.BaseRootExec.next():95
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():415
org.apache.hadoop.security.UserGroupInformation.doAs():1595
org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():744
at 
org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:94)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.batch.ControlMessageHandler.handle(ControlMessageHandler.java:55)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:157) 
[drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at org.apache.drill.exec.rpc.BasicServer.handle(BasicServer.java:53) 
[drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
[drill-rpc-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162117#comment-16162117
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138205470
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/server/TestOptions.java ---
@@ -56,7 +56,7 @@ public void checkChangedColumn() throws Exception {
 test("ALTER session SET `%s` = %d;", SLICE_TARGET,
   ExecConstants.SLICE_TARGET_DEFAULT);
 testBuilder()
-.sqlQuery("SELECT status FROM sys.options WHERE name = '%s' AND 
type = 'SESSION'", SLICE_TARGET)
+.sqlQuery("SELECT status FROM sys.options WHERE name = '%s' AND 
optionScope = 'SESSION'", SLICE_TARGET)
--- End diff --

Taking a note of what we discussed offline. 

The course of action is to just change the name of **type**, since it is a bad 
name and its semantics have been ill-defined, so no one could have relied on the 
values returned in the past anyway. I am changing the name to **accessibleScopes**.


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered but are not visible in the 
> sys.options table. These internal options could be seen through another alias: 
> select * from internal.options. The intention is to put new options we aren't 
> yet comfortable exposing to the end user into this table.
> After the options and their corresponding features are considered stable, they 
> could be changed to appear in the sys.options table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager.
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the option's visibility, required permissions, and the 
> scope in which it can be set.
> * The OptionManager interface has been cleaned up so that a Type no longer 
> needs to be passed in to set and delete options.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5761) Disable Lilith ClassicMultiplexSocketAppender by default

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162081#comment-16162081
 ] 

ASF GitHub Bot commented on DRILL-5761:
---

Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/930#discussion_r138200345
  
--- Diff: common/src/test/resources/logback-test.xml ---
@@ -0,0 +1,111 @@
+  [XML element markup lost in the archived message; the 111 added lines of
+   logback-test.xml configure a Lilith multiplex socket appender (remote host
+   ${LILITH_HOSTNAME:-localhost}) and a console appender with the pattern
+   %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n]
--- End diff --

OK.


> Disable Lilith ClassicMultiplexSocketAppender by default
> 
>
> Key: DRILL-5761
> URL: https://issues.apache.org/jira/browse/DRILL-5761
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When running unit tests on a node where the Hiveserver2 service is running, 
> the test run hangs in the middle. Jstack shows that some threads are waiting 
> on a condition.
> {noformat}
> Full thread dump
> "main" prio=10 tid=0x7f0998009800 nid=0x17f7 waiting on condition 
> [0x7f09a0c6d000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00076004ebf0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:324)
>   at 
> de.huxhorn.lilith.sender.MultiplexSendBytesService.sendBytes(MultiplexSendBytesService.java:132)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.sendBytes(MultiplexSocketAppenderBase.java:336)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.append(MultiplexSocketAppenderBase.java:348)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:441)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:395)
>   at ch.qos.logback.classic.Logger.error(Logger.java:558)
>   at 
> org.apache.drill.test.DrillTest$TestLogReporter.failed(DrillTest.java:153)
>   at org.junit.rules.TestWatcher.failedQuietly(TestWatcher.java:84)
>   at org.junit.rules.TestWatcher.access$300(TestWatcher.java:46)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:62)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runners.Suite.runChild(Suite.java:127)
>   at org.junit.runners.Suite.runChild(Suite.java:26)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at 
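
The hang mechanism visible in the stack above (MultiplexSendBytesService.sendBytes 
parked in ArrayBlockingQueue.put) is just a bounded blocking queue with nothing 
draining it. A minimal illustration (plain JDK code, not Lilith's implementation; 
the queue size of 1 is arbitrary):

{code}
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedQueueHang {
  public static void main(String[] args) throws InterruptedException {
    // The appender hands serialized log events to a bounded queue. If nothing
    // drains the queue, put() blocks forever once the queue fills up.
    ArrayBlockingQueue<String> sendQueue = new ArrayBlockingQueue<>(1);

    sendQueue.put("event 1");   // fills the queue
    System.out.println("queue full; the next put() parks the logging thread...");
    sendQueue.put("event 2");   // blocks: the WAITING (parking) state seen in the jstack
    System.out.println("never reached without a consumer");
  }
}
{code}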

[jira] [Commented] (DRILL-5377) Five-digit year dates are displayed incorrectly via jdbc

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161978#comment-16161978
 ] 

ASF GitHub Bot commented on DRILL-5377:
---

Github user vdiravka commented on the issue:

https://github.com/apache/drill/pull/916
  
@paul-rogers 

There is no bug with corrupt Parquet dates; that was fixed in the context of 
DRILL-4203.

This commit fixes the rendering of five-digit year dates and doesn't change the 
logic for dates with four or fewer year digits. It is done in a similar manner 
to TimePrintMillis.

But the best solution is to use proper formatting. I am working on that, so 
this PR can be closed.
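
For what it's worth, a minimal sketch of the formatting approach (using Joda-Time, 
which Drill already depends on; this is an illustration, not the actual patch):

{code}
import org.joda.time.LocalDate;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class FiveDigitYearPrint {
  public static void main(String[] args) {
    // The stored value is correct; only the string rendering truncated the year.
    LocalDate futureDate = new LocalDate(11356, 2, 16);

    // A pattern-based formatter prints all year digits instead of dropping the first one.
    DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyy-MM-dd");
    System.out.println(fmt.print(futureDate)); // expected: 11356-02-16
  }
}
{code}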


> Five-digit year dates are displayed incorrectly via jdbc
> 
>
> Key: DRILL-5377
> URL: https://issues.apache.org/jira/browse/DRILL-5377
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.10.0
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> git.commit.id.abbrev=38ef562
> The issue is connected to displaying five-digit year dates via jdbc
> Below is the output I get from the test framework when I disable auto-correction 
> for date fields:
> {code}
> select l_shipdate from table(cp.`tpch/lineitem.parquet` (type => 'parquet', 
> autoCorrectCorruptDates => false)) order by l_shipdate limit 10;
> ^@356-03-19
> ^@356-03-21
> ^@356-03-21
> ^@356-03-23
> ^@356-03-24
> ^@356-03-24
> ^@356-03-26
> ^@356-03-26
> ^@356-03-26
> ^@356-03-26
> {code}
> Or a simpler case:
> {code}
> 0: jdbc:drill:> select cast('11356-02-16' as date) as FUTURE_DATE from 
> (VALUES(1));
> +--+
> | FUTURE_DATE  |
> +--+
> | 356-02-16   |
> +--+
> 1 row selected (0.293 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161937#comment-16161937
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138182296
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/BaseOptionManager.java
 ---
@@ -17,44 +17,84 @@
  */
 package org.apache.drill.exec.server.options;
 
-import 
org.apache.drill.exec.server.options.TypeValidators.BooleanValidator;
-import org.apache.drill.exec.server.options.TypeValidators.DoubleValidator;
-import org.apache.drill.exec.server.options.TypeValidators.LongValidator;
-import org.apache.drill.exec.server.options.TypeValidators.StringValidator;
-
-public abstract class BaseOptionManager implements OptionSet {
-//  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(BaseOptionManager.class);
-
-  /**
-   * Gets the current option value given a validator.
-   *
-   * @param validator the validator
-   * @return option value
-   * @throws IllegalArgumentException - if the validator is not found
-   */
-  private OptionValue getOptionSafe(OptionValidator validator)  {
-OptionValue value = getOption(validator.getOptionName());
-return value == null ? validator.getDefault() : value;
+import org.apache.drill.common.exceptions.UserException;
+
+import java.util.Iterator;
+
+/**
+ * This {@link OptionManager} implements some the basic methods and should 
be extended by concrete implementations.
+ */
+public abstract class BaseOptionManager extends BaseOptionSet implements 
OptionManager {
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(BaseOptionManager.class);
+
+  @Override
+  public OptionList getInternalOptionList() {
+return getAllOptionList(true);
   }
 
   @Override
-  public boolean getOption(BooleanValidator validator) {
-return getOptionSafe(validator).bool_val;
+  public OptionList getPublicOptionList() {
+return getAllOptionList(false);
   }
 
   @Override
-  public double getOption(DoubleValidator validator) {
-return getOptionSafe(validator).float_val;
+  public void setLocalOption(String name, boolean value) {
+setLocalOption(OptionValue.Kind.BOOLEAN, name, 
Boolean.toString(value));
--- End diff --

Done.


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered but are not visible in the 
> sys.options table. These internal options could be seen through another alias: 
> select * from internal.options. The intention is to put new options we aren't 
> yet comfortable exposing to the end user into this table.
> After the options and their corresponding features are considered stable, they 
> could be changed to appear in the sys.options table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager.
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the option's visibility, required permissions, and the 
> scope in which it can be set.
> * The OptionManager interface has been cleaned up so that a Type no longer 
> needs to be passed in to set and delete options.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5753.
-

I have verified that this has been fixed.

> Managed External Sort: One or more nodes ran out of memory while executing 
> the query.
> -
>
> Key: DRILL-5753
> URL: https://issues.apache.org/jira/browse/DRILL-5753
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, 
> drillbit.log
>
>
> The query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1252428800;
> select count(*) from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid 
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> The stack trace is:
> {noformat}
> 2017-08-30 03:35:10,479 [BitServer-5] DEBUG 
> o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: 
> State change requested RUNNING --> FAILED
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One 
> or more nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 due to memory limit. Current 
> allocation: 43960640
> Fragment 2:9
> [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate 
> buffer of size 4194304 due to memory limit. Current allocation: 43960640
> org.apache.drill.exec.memory.BaseAllocator.buffer():238
> org.apache.drill.exec.memory.BaseAllocator.buffer():213
> org.apache.drill.exec.vector.BigIntVector.reAlloc():252
> org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452
> org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355
> org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220
> 
> org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202
> 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82
> 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77
> 
> org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():744
> at 
> org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]

[jira] [Resolved] (DRILL-5753) Managed External Sort: One or more nodes ran out of memory while executing the query.

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou resolved DRILL-5753.
---
Resolution: Fixed

> Managed External Sort: One or more nodes ran out of memory while executing 
> the query.
> -
>
> Key: DRILL-5753
> URL: https://issues.apache.org/jira/browse/DRILL-5753
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 26596b4e-9883-7dc2-6275-37134f7d63be.sys.drill, 
> drillbit.log
>
>
> The query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.memory.max_query_memory_per_node` = 1252428800;
> select count(*) from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid 
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid, s2.rptds.a, s2.rptds.do_not_exist
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> The stack trace is:
> {noformat}
> 2017-08-30 03:35:10,479 [BitServer-5] DEBUG 
> o.a.drill.exec.work.foreman.Foreman - 26596b4e-9883-7dc2-6275-37134f7d63be: 
> State change requested RUNNING --> FAILED
> org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One 
> or more nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 due to memory limit. Current 
> allocation: 43960640
> Fragment 2:9
> [Error Id: f58210a2-7569-42d0-8961-8c7e42c7fea3 on atsqa6c80.qa.lab:31010]
>   (org.apache.drill.exec.exception.OutOfMemoryException) Unable to allocate 
> buffer of size 4194304 due to memory limit. Current allocation: 43960640
> org.apache.drill.exec.memory.BaseAllocator.buffer():238
> org.apache.drill.exec.memory.BaseAllocator.buffer():213
> org.apache.drill.exec.vector.BigIntVector.reAlloc():252
> org.apache.drill.exec.vector.BigIntVector$Mutator.setSafe():452
> org.apache.drill.exec.vector.RepeatedBigIntVector$Mutator.addSafe():355
> org.apache.drill.exec.vector.RepeatedBigIntVector.copyFromSafe():220
> 
> org.apache.drill.exec.vector.RepeatedBigIntVector$TransferImpl.copyValueSafe():202
> 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> 
> org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():225
> org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():82
> 
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.doCopy():47
> org.apache.drill.exec.test.generated.PriorityQueueCopierGen1466.next():77
> 
> org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():267
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():374
> 
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():303
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
> org.apache.drill.exec.record.AbstractRecordBatch.next():164
> org.apache.drill.exec.physical.impl.BaseRootExec.next():105
> 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
> org.apache.drill.exec.physical.impl.BaseRootExec.next():95
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():744
> at 
> org.apache.drill.exec.work.foreman.QueryManager$1.statusUpdate(QueryManager.java:521)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.rpc.control.WorkEventBus.statusUpdate(WorkEventBus.java:71)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> 

[jira] [Closed] (DRILL-5744) External sort fails with OOM error

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou closed DRILL-5744.
-

> External sort fails with OOM error
> --
>
> Key: DRILL-5744
> URL: https://issues.apache.org/jira/browse/DRILL-5744
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Robert Hou
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265b163b-cf44-d2ff-2e70-4cd746b56611.sys.drill, 
> q34.drillbit.log
>
>
> Query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 152428800;
> select count(*) from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid 
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.width.max_per_node` = 17;
> alter session set `planner.disable_exchanges` = false;
> alter session set `planner.width.max_per_query` = 1000;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> Stack trace is:
> {noformat}
> 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes
>  ran out of memory while executing the query. (Unable to allocate buffer of 
> size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 7
> 9986944)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 (rounded from 3276750) due to 
> memory limit. Current allocation: 79986944
> [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
>  ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
>   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
> allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Cur
> rent allocation: 79986944
>   at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>
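For context, the "rounded from" figure in the trace reflects power-of-two buffer sizing: the 3,276,750-byte request is rounded up to 4,194,304 bytes (2^22), and that larger grant is what pushes the allocator past its limit on top of the existing 79,986,944-byte allocation. A minimal sketch of that rounding arithmetic (illustrative only, not Drill's allocator code; the class and helper names are made up):

```
public class BufferRoundingSketch {
  // Round a requested size up to the next power of two, as buddy-style allocators do.
  static long roundToPowerOfTwo(long requested) {
    long floor = Long.highestOneBit(requested);
    return floor == requested ? floor : floor << 1;
  }

  public static void main(String[] args) {
    long requested = 3_276_750L;
    long granted = roundToPowerOfTwo(requested);   // 4_194_304
    long current = 79_986_944L;                    // allocation reported in the error
    System.out.printf("requested=%d granted=%d after=%d%n",
        requested, granted, current + granted);
  }
}
```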

[jira] [Resolved] (DRILL-5744) External sort fails with OOM error

2017-09-11 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou resolved DRILL-5744.
---
Resolution: Fixed

This has been verified.

> External sort fails with OOM error
> --
>
> Key: DRILL-5744
> URL: https://issues.apache.org/jira/browse/DRILL-5744
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.10.0
>Reporter: Robert Hou
>Assignee: Paul Rogers
> Fix For: 1.12.0
>
> Attachments: 265b163b-cf44-d2ff-2e70-4cd746b56611.sys.drill, 
> q34.drillbit.log
>
>
> Query is:
> {noformat}
> ALTER SESSION SET `exec.sort.disable_managed` = false;
> alter session set `planner.width.max_per_node` = 1;
> alter session set `planner.disable_exchanges` = true;
> alter session set `planner.width.max_per_query` = 1;
> alter session set `planner.memory.max_query_memory_per_node` = 152428800;
> select count(*) from (
>   select * from (
> select s1.type type, flatten(s1.rms.rptd) rptds, s1.rms, s1.uid 
> from (
>   select d.type type, d.uid uid, flatten(d.map.rm) rms from 
> dfs.`/drill/testdata/resource-manager/nested-large.json` d order by d.uid
> ) s1
>   ) s2
>   order by s2.rms.mapid
> );
> ALTER SESSION SET `exec.sort.disable_managed` = true;
> alter session set `planner.width.max_per_node` = 17;
> alter session set `planner.disable_exchanges` = false;
> alter session set `planner.width.max_per_query` = 1000;
> alter session set `planner.memory.max_query_memory_per_node` = 2147483648;
> {noformat}
> Stack trace is:
> {noformat}
> 2017-08-23 06:59:42,763 [266275e5-ebdb-14ae-d52d-00fa3a154f6d:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes
>  ran out of memory while executing the query. (Unable to allocate buffer of 
> size 4194304 (rounded from 3276750) due to memory limit. Current allocation: 79986944)
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
> nodes ran out of memory while executing the query.
> Unable to allocate buffer of size 4194304 (rounded from 3276750) due to 
> memory limit. Current allocation: 79986944
> [Error Id: 4f4959df-0921-4a50-b75e-56488469ab10 ]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
>  ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
>   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
> allocate buffer of size 4194304 (rounded from 3276750) due to memory limit. 
> Current allocation: 79986944
>   at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:236)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:33)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46)
>  ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:113)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateMap(VectorInitializer.java:130)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:93)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
>   at 
> org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85)
>  

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161866#comment-16161866
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138173050
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/BaseOptionManager.java
 ---
@@ -17,44 +17,84 @@
  */
 package org.apache.drill.exec.server.options;
 
-import 
org.apache.drill.exec.server.options.TypeValidators.BooleanValidator;
-import org.apache.drill.exec.server.options.TypeValidators.DoubleValidator;
-import org.apache.drill.exec.server.options.TypeValidators.LongValidator;
-import org.apache.drill.exec.server.options.TypeValidators.StringValidator;
-
-public abstract class BaseOptionManager implements OptionSet {
-//  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(BaseOptionManager.class);
-
-  /**
-   * Gets the current option value given a validator.
-   *
-   * @param validator the validator
-   * @return option value
-   * @throws IllegalArgumentException - if the validator is not found
-   */
-  private OptionValue getOptionSafe(OptionValidator validator)  {
-OptionValue value = getOption(validator.getOptionName());
-return value == null ? validator.getDefault() : value;
+import org.apache.drill.common.exceptions.UserException;
+
+import java.util.Iterator;
+
+/**
+ * This {@link OptionManager} implements some of the basic methods and should be extended by concrete implementations.
+ */
+public abstract class BaseOptionManager extends BaseOptionSet implements 
OptionManager {
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(BaseOptionManager.class);
+
+  @Override
+  public OptionList getInternalOptionList() {
+return getAllOptionList(true);
   }
 
   @Override
-  public boolean getOption(BooleanValidator validator) {
-return getOptionSafe(validator).bool_val;
+  public OptionList getPublicOptionList() {
+return getAllOptionList(false);
   }
 
   @Override
-  public double getOption(DoubleValidator validator) {
-return getOptionSafe(validator).float_val;
+  public void setLocalOption(String name, boolean value) {
+setLocalOption(OptionValue.Kind.BOOLEAN, name, 
Boolean.toString(value));
   }
 
   @Override
-  public long getOption(LongValidator validator) {
-return getOptionSafe(validator).num_val;
+  public void setLocalOption(String name, long value) {
+setLocalOption(OptionValue.Kind.LONG, name, Long.toString(value));
   }
 
   @Override
-  public String getOption(StringValidator validator) {
-return getOptionSafe(validator).string_val;
+  public void setLocalOption(String name, double value) {
+setLocalOption(OptionValue.Kind.DOUBLE, name, Double.toString(value));
   }
 
+  @Override
+  public void setLocalOption(String name, String value) {
+setLocalOption(OptionValue.Kind.STRING, name, value);
+  }
+
+  @Override
+  public void setLocalOption(OptionValue.Kind kind, String name, String 
value) {
+final OptionDefinition definition = getOptionDefinition(name);
--- End diff --

Yeah it does. I've updated the javadoc to make this explicit.
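For illustration, a caller-side sketch of the typed setters from this diff; the wiring around it is hypothetical, and the option names are ones that appear elsewhere in this thread:

```
// Sketch only; assumes an OptionManager as declared in this PR.
void tuneSession(OptionManager options) {
  // Each typed overload stringifies its value and delegates to the Kind-based
  // setter, which resolves the OptionDefinition for the name before storing it.
  options.setLocalOption("planner.slice_target", 1000L);
  options.setLocalOption("exec.sort.disable_managed", false);
}
```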


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The 

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161859#comment-16161859
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138171404
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/DrillConfigIterator.java
 ---
@@ -58,17 +58,17 @@ public OptionValue next() {
   OptionValue optionValue = null;
   switch(cv.valueType()) {
   case BOOLEAN:
-optionValue = OptionValue.createBoolean(OptionType.BOOT, name, 
(Boolean) cv.unwrapped(), OptionScope.BOOT);
+optionValue = OptionValue.create(OptionType.BOOT, name, (Boolean) 
cv.unwrapped(), OptionScope.BOOT);
--- End diff --

Agreed


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161845#comment-16161845
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138168825
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/OptionManager.java
 ---
@@ -17,49 +17,97 @@
  */
 package org.apache.drill.exec.server.options;
 
-import org.apache.drill.exec.server.options.OptionValue.OptionType;
+import javax.validation.constraints.NotNull;
 
 /**
  * Manager for Drill {@link OptionValue options}. Implementations must be 
case-insensitive to the name of an option.
  */
 public interface OptionManager extends OptionSet, Iterable {
 
   /**
-   * Sets an option value.
-   *
-   * @param value option value
-   * @throws org.apache.drill.common.exceptions.UserException message to 
describe error with value
+   * Sets a boolean option on the {@link OptionManager}.
+   * @param name The name of the option.
+   * @param value The value of the option.
*/
-  void setOption(OptionValue value);
+  void setLocalOption(String name, boolean value);
 
   /**
-   * Deletes the option. Unfortunately, the type is required given the 
fallback structure of option managers.
-   * See {@link FallbackOptionManager}.
+   * Sets a long option on the {@link OptionManager}.
+   * @param name The name of the option.
+   * @param value The value of the option.
+   */
+  void setLocalOption(String name, long value);
+
+  /**
+   * Sets a double option on the {@link OptionManager}.
+   * @param name The name of the option.
+   * @param value The value of the option.
+   */
+  void setLocalOption(String name, double value);
+
+  /**
+   * Sets a String option on the {@link OptionManager}.
+   * @param name The name of the option.
+   * @param value The value of the option.
+   */
+  void setLocalOption(String name, String value);
+
+  /**
+   * Sets an option of the specified {@link OptionValue.Kind} on the 
{@link OptionManager}.
+   * @param kind The kind of the option.
+   * @param name The name of the option.
+   * @param value The value of the option.
+   */
+  void setLocalOption(OptionValue.Kind kind, String name, String value);
+
+  /**
+   * Deletes the option.
*
-   * If the option name is valid (exists in {@link 
SystemOptionManager#VALIDATORS}),
+   * If the option name is valid (exists in the set of validators produced 
by {@link SystemOptionManager#createDefaultOptionDefinitions()}),
* but the option was not set within this manager, calling this method 
should be a no-op.
*
* @param name option name
-   * @param type option type
* @throws org.apache.drill.common.exceptions.UserException message to 
describe error with value
*/
-  void deleteOption(String name, OptionType type);
+  void deleteLocalOption(String name);
 
   /**
-   * Deletes all options. Unfortunately, the type is required given the 
fallback structure of option managers.
-   * See {@link FallbackOptionManager}.
+   * Deletes all options.
*
* If no options are set, calling this method should be no-op.
*
-   * @param type option type
* @throws org.apache.drill.common.exceptions.UserException message to 
describe error with value
*/
-  void deleteAllOptions(OptionType type);
+  void deleteAllLocalOptions();
+
+  /**
+   * Get the option definition corresponding to the given option name.
+   * @param name The name of the option to retrieve a validator for.
+   * @return The option validator corresponding to the given option name.
+   */
+  @NotNull
+  OptionDefinition getOptionDefinition(String name);
 
   /**
* Gets the list of options managed this manager.
*
* @return the list of options
*/
   OptionList getOptionList();
+
+  /**
+   * Returns all the internal options contained in this option manager.
+   *
+   * @return All the internal options contained in this option manager.
+   */
+  @NotNull
+  OptionList getInternalOptionList();
--- End diff --

Internal and Local are not the same thing. Local means that the value for 
the option is stored in this option manager. Internal means that it is not 
visible with the rest of the options unless you look in the special 
sys.internal_options table. I will add this distinction to the javadoc.
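To make that distinction concrete, a minimal usage sketch against the interface in this diff (assuming OptionList iterates over OptionValue and that OptionValue exposes its public name field, as elsewhere in Drill):

```
// Illustrative only; not part of the PR.
void dumpOptions(OptionManager options) {
  // Hidden from sys.options; visible only through the internal-options table.
  for (OptionValue v : options.getInternalOptionList()) {
    System.out.println("internal: " + v.name);
  }
  // Shown to end users in sys.options.
  for (OptionValue v : options.getPublicOptionList()) {
    System.out.println("public:   " + v.name);
  }
}
```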


> Support System/Session Internal Options And Additional Option System Fixes
> 

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161840#comment-16161840
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138168189
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/OptionMetaData.java
 ---
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.server.options;
+
+/**
+ * Contains information about the scopes in which an option can be set, 
and an option's visibility.
+ */
+public class OptionMetaData {
+  public static final OptionMetaData DEFAULT = new 
OptionMetaData(OptionValue.OptionType.ALL, false, false);
+
+  private OptionValue.OptionType type;
--- End diff --

Done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161839#comment-16161839
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138168177
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/OptionMetaData.java
 ---
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.server.options;
+
+/**
+ * Contains information about the scopes in which an option can be set, 
and an option's visibility.
+ */
+public class OptionMetaData {
+  public static final OptionMetaData DEFAULT = new 
OptionMetaData(OptionValue.OptionType.ALL, false, false);
+
+  private OptionValue.OptionType type;
+  private boolean adminOption;
--- End diff --

Done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161835#comment-16161835
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138167788
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/OptionValue.java
 ---
@@ -63,32 +88,32 @@
   public final Double float_val;
   public final OptionScope scope;
 
-  public static OptionValue createLong(OptionType type, String name, long 
val, OptionScope scope) {
+  public static OptionValue create(OptionType type, String name, long val, 
OptionScope scope) {
 return new OptionValue(Kind.LONG, type, name, val, null, null, null, 
scope);
   }
 
-  public static OptionValue createBoolean(OptionType type, String name, 
boolean bool, OptionScope scope) {
+  public static OptionValue create(OptionType type, String name, boolean 
bool, OptionScope scope) {
 return new OptionValue(Kind.BOOLEAN, type, name, null, null, bool, 
null, scope);
   }
 
-  public static OptionValue createString(OptionType type, String name, 
String val, OptionScope scope) {
+  public static OptionValue create(OptionType type, String name, String 
val, OptionScope scope) {
 return new OptionValue(Kind.STRING, type, name, null, val, null, null, 
scope);
   }
 
-  public static OptionValue createDouble(OptionType type, String name, 
double val, OptionScope scope) {
+  public static OptionValue create(OptionType type, String name, double 
val, OptionScope scope) {
 return new OptionValue(Kind.DOUBLE, type, name, null, null, null, val, 
scope);
   }
 
-  public static OptionValue createOption(Kind kind, OptionType type, 
String name, String val, OptionScope scope) {
+  public static OptionValue create(Kind kind, OptionType type, String 
name, String val, OptionScope scope) {
--- End diff --

Done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5002) Using hive's date functions on top of date column gives wrong results for local time-zone

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161763#comment-16161763
 ] 

ASF GitHub Bot commented on DRILL-5002:
---

Github user vdiravka commented on a diff in the pull request:

https://github.com/apache/drill/pull/937#discussion_r138133454
  
--- Diff: 
contrib/storage-hive/core/src/main/codegen/templates/ObjectInspectorHelper.java 
---
@@ -204,7 +204,11 @@ public static JBlock getDrillObject(JCodeModel m, 
ObjectInspector oi,
   <#elseif entry.hiveType == "TIMESTAMP">
 JVar tsVar = 
jc._else().decl(m.directClass(java.sql.Timestamp.class.getCanonicalName()), 
"ts",
   castedOI.invoke("getPrimitiveJavaObject").arg(returnValue));
-jc._else().assign(returnValueHolder.ref("value"), 
tsVar.invoke("getTime"));
+// Bringing relative timestamp value without timezone info to 
timestamp value in UTC, since Drill keeps date-time values in UTC
--- End diff --

I don't fully agree. Let me explain:

Take TimeStampVector as an example: Drill keeps date-time values in the DrillBuf as
millis from epoch. Timestamp values are then extracted via the
[getObject()](https://github.com/apache/drill/blob/1c09c2f13bd0f50ca40c17dc0bfa7aae5826b8c3/exec/java-exec/src/main/codegen/templates/FixedValueVectors.java#L446)
method (I agree that the code in this method is questionable, but that is not the
current issue; this jira covers a different bug).
For a machine in the PST timezone and the query `select timestamp '1970-01-01 00:00:00' from (VALUES(1))`,
the `0 millis` timestamp value is stored in the DrillBuf, and only at the extraction
stage is the value rendered in the server timezone. Querying timestamp data from any
data source gives the same result: '1970-01-01 00:00:00' is stored as a `0 millis`
timestamp in any timezone. Only for Hive functions does the logic differ:
Consider `select to_utc_timestamp('1969-12-31 16:00:00','PST') from (VALUES(1))`.
The result should be "1970-01-01 00:00:00" even with the current Drill date/time logic.
The output of Hive's UDF is a "java.sql.Timestamp" value (`28800000 millis`
internally on a PST machine, `0 millis` internally on a UTC machine). But to stay
consistent with the
[getObject()](https://github.com/apache/drill/blob/1c09c2f13bd0f50ca40c17dc0bfa7aae5826b8c3/exec/java-exec/src/main/codegen/templates/FixedValueVectors.java#L446)
logic above, the `0 millis` timestamp value should be stored in the DrillBuf for any
timezone.

As a result, with my fix the same timestamp values are stored in the DrillBuf on a
machine in any timezone.
Let me know if I am missing something.
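For reference, a small sketch of the arithmetic described above using only JDK classes (this is not the patch itself): a java.sql.Timestamp parsed on a PST machine for '1970-01-01 00:00:00' holds 28800000 millis, and adding the (negative) zone offset brings it to the 0-millis value Drill stores.

```
import java.sql.Timestamp;
import java.util.TimeZone;

public class UtcShiftSketch {
  public static void main(String[] args) {
    TimeZone.setDefault(TimeZone.getTimeZone("PST"));  // pretend the drillbit runs in PST

    // What a Hive UDF hands back on a PST machine for '1970-01-01 00:00:00'.
    Timestamp ts = Timestamp.valueOf("1970-01-01 00:00:00");
    System.out.println(ts.getTime());                  // 28800000

    // Shift by the local offset so the same value is stored in any timezone.
    long utcMillis = ts.getTime() + TimeZone.getDefault().getOffset(ts.getTime());
    System.out.println(utcMillis);                     // 0
  }
}
```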


> Using hive's date functions on top of date column gives wrong results for 
> local time-zone
> -
>
> Key: DRILL-5002
> URL: https://issues.apache.org/jira/browse/DRILL-5002
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Hive, Storage - Parquet
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Critical
> Attachments: 0_0_0.parquet
>
>
> git.commit.id.abbrev=190d5d4
> Wrong Result 1 :
> {code}
> select l_shipdate, `month`(l_shipdate) from cp.`tpch/lineitem.parquet` where 
> l_shipdate = date '1994-02-01' limit 2;
> +-+-+
> | l_shipdate  | EXPR$1  |
> +-+-+
> | 1994-02-01  | 1   |
> | 1994-02-01  | 1   |
> +-+-+
> {code}
> Wrong Result 2 : 
> {code}
> select l_shipdate, `day`(l_shipdate) from cp.`tpch/lineitem.parquet` where 
> l_shipdate = date '1998-06-02' limit 2;
> +-+-+
> | l_shipdate  | EXPR$1  |
> +-+-+
> | 1998-06-02  | 1   |
> | 1998-06-02  | 1   |
> +-+-+
> {code}
> Correct Result :
> {code}
> select l_shipdate, `month`(l_shipdate) from cp.`tpch/lineitem.parquet` where 
> l_shipdate = date '1998-06-02' limit 2;
> +-+-+
> | l_shipdate  | EXPR$1  |
> +-+-+
> | 1998-06-02  | 6   |
> | 1998-06-02  | 6   |
> +-+-+
> {code}
> It looks like we are getting wrong results when the 'day' is '01'. I only 
> tried month and day hive functionsbut wouldn't be surprised if they have 
> similar issues too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161727#comment-16161727
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138152163
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestMergeJoinAdvanced.java
 ---
@@ -102,10 +103,11 @@ public void testFix2967() throws Exception {
   test("select * from dfs_test.`%s/join/j1` j1 left outer join 
dfs_test.`%s/join/j2` j2 on (j1.c_varchar = j2.c_varchar)",
 TEST_RES_PATH, TEST_RES_PATH);
 } finally {
-  setSessionOption(PlannerSettings.BROADCAST.getOptionName(), 
String.valueOf(PlannerSettings.BROADCAST.getDefault().bool_val));
-  setSessionOption(PlannerSettings.HASHJOIN.getOptionName(), 
String.valueOf(PlannerSettings.HASHJOIN.getDefault().bool_val));
+  final OperatorFixture.TestOptionSet testOptionSet = new 
OperatorFixture.TestOptionSet();
+  setSessionOption(PlannerSettings.BROADCAST.getOptionName(), 
String.valueOf(testOptionSet.getDefault(PlannerSettings.BROADCAST.getOptionName()).bool_val));
--- End diff --

Done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161728#comment-16161728
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138152185
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/options/SystemOptionManager.java
 ---
@@ -346,17 +347,63 @@ public void deleteAllOptions(OptionType type) {
 }
   }
 
-  public void populateDefaultValues() {
-
+  public static CaseInsensitiveMap 
populateDefualtValues(Map definitions, DrillConfig 
bootConfig) {
--- End diff --

Done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161697#comment-16161697
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138148176
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestMergeJoinAdvanced.java
 ---
@@ -102,10 +103,11 @@ public void testFix2967() throws Exception {
   test("select * from dfs_test.`%s/join/j1` j1 left outer join 
dfs_test.`%s/join/j2` j2 on (j1.c_varchar = j2.c_varchar)",
 TEST_RES_PATH, TEST_RES_PATH);
 } finally {
-  setSessionOption(PlannerSettings.BROADCAST.getOptionName(), 
String.valueOf(PlannerSettings.BROADCAST.getDefault().bool_val));
-  setSessionOption(PlannerSettings.HASHJOIN.getOptionName(), 
String.valueOf(PlannerSettings.HASHJOIN.getDefault().bool_val));
+  final OperatorFixture.TestOptionSet testOptionSet = new 
OperatorFixture.TestOptionSet();
+  setSessionOption(PlannerSettings.BROADCAST.getOptionName(), 
String.valueOf(testOptionSet.getDefault(PlannerSettings.BROADCAST.getOptionName()).bool_val));
--- End diff --

The test changes some session options. Since the same drillbit cluster is reused
across multiple tests, those changes need to be undone at the end of the test so
they don't impact the tests that follow. I basically kept things as they were
before, but looking at it again I agree this is a pretty messy way of doing it. I
think it could be cleaned up by running:

```
ALTER SESSION RESET ALL;
```

That clears all the changes at the end of the test. Instead of passing the option
value as a String, I'll also add a setSessionOption method for each type:

```
setSessionOption(final String option, final boolean value)
setSessionOption(final String option, final long value)
setSessionOption(final String option, final double value)
setSessionOption(final String option, final String value)
```
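A minimal sketch of what those overloads could look like in the test base class, assuming the existing test(String query, Object... args) helper used by the tests in this thread; everything else here is hypothetical:

```
// Sketch only; relies on a test(String query, Object... args) helper that throws Exception.
public static void setSessionOption(final String option, final boolean value) throws Exception {
  test("ALTER SESSION SET `%s` = %s", option, value);
}

public static void setSessionOption(final String option, final long value) throws Exception {
  test("ALTER SESSION SET `%s` = %d", option, value);
}

public static void setSessionOption(final String option, final double value) throws Exception {
  test("ALTER SESSION SET `%s` = %f", option, value);
}

public static void setSessionOption(final String option, final String value) throws Exception {
  test("ALTER SESSION SET `%s` = '%s'", option, value);
}

public static void resetSessionOptions() throws Exception {
  test("ALTER SESSION RESET ALL");
}
```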


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161604#comment-16161604
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138133824
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/limit/TestLimitWithExchanges.java
 ---
@@ -71,7 +72,8 @@ public void testPushLimitPastUnionExchange() throws 
Exception {
   final String[] expectedPlan5 = 
{"(?s)Limit\\(fetch=\\[1\\].*UnionExchange.*Limit\\(fetch=\\[1\\]\\).*Join"};
   testLimitHelper(sql5, expectedPlan5, excludedPlan, 1);
 } finally {
-  test("alter session set `planner.slice_target` = " + 
ExecConstants.SLICE_TARGET_OPTION.getDefault().getValue());
+  final OperatorFixture.TestOptionSet testOptionSet = new 
OperatorFixture.TestOptionSet();
+  test("alter session set `planner.slice_target` = " + 
testOptionSet.getDefault(ExecConstants.SLICE_TARGET).getValue());
--- End diff --

done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161597#comment-16161597
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138133105
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/testing/TestExceptionInjection.java
 ---
@@ -216,79 +216,77 @@ public void injectionOnSpecificBit() {
 final ZookeeperHelper zkHelper = new ZookeeperHelper();
 zkHelper.startZookeeper(1);
 
-// Creating two drillbits
-final Drillbit drillbit1, drillbit2;
-final DrillConfig drillConfig = zkHelper.getConfig();
 try {
-  drillbit1 = Drillbit.start(drillConfig, remoteServiceSet);
-  drillbit2 = Drillbit.start(drillConfig, remoteServiceSet);
-} catch (DrillbitStartupException e) {
-  throw new RuntimeException("Failed to start drillbits.", e);
-}
+  // Creating two drillbits
+  final Drillbit drillbit1, drillbit2;
+  final DrillConfig drillConfig = zkHelper.getConfig();
+  try {
+drillbit1 = Drillbit.start(drillConfig, remoteServiceSet);
+drillbit2 = Drillbit.start(drillConfig, remoteServiceSet);
+  } catch (DrillbitStartupException e) {
+throw new RuntimeException("Failed to start drillbits.", e);
+  }
 
-final DrillbitContext drillbitContext1 = drillbit1.getContext();
-final DrillbitContext drillbitContext2 = drillbit2.getContext();
+  final DrillbitContext drillbitContext1 = drillbit1.getContext();
+  final DrillbitContext drillbitContext2 = drillbit2.getContext();
 
-final UserSession session = UserSession.Builder.newBuilder()
-
.withCredentials(UserBitShared.UserCredentials.newBuilder().setUserName("foo").build())
-.withUserProperties(UserProperties.getDefaultInstance())
-.withOptionManager(drillbitContext1.getOptionManager())
-.build();
+  final UserSession session = 
UserSession.Builder.newBuilder().withCredentials(UserBitShared.UserCredentials.newBuilder().setUserName("foo").build()).withUserProperties(UserProperties.getDefaultInstance()).withOptionManager(drillbitContext1.getOptionManager()).build();
--- End diff --

I'm a victim of autoformat. Fixed.


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161595#comment-16161595
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138133008
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/testing/TestPauseInjection.java
 ---
@@ -150,66 +150,61 @@ public void pauseOnSpecificBit() {
 final ZookeeperHelper zkHelper = new ZookeeperHelper();
 zkHelper.startZookeeper(1);
 
-// Creating two drillbits
-final Drillbit drillbit1, drillbit2;
-final DrillConfig drillConfig = zkHelper.getConfig();
 try {
-  drillbit1 = Drillbit.start(drillConfig, remoteServiceSet);
-  drillbit2 = Drillbit.start(drillConfig, remoteServiceSet);
-} catch (final DrillbitStartupException e) {
-  throw new RuntimeException("Failed to start two drillbits.", e);
-}
-
-final DrillbitContext drillbitContext1 = drillbit1.getContext();
-final DrillbitContext drillbitContext2 = drillbit2.getContext();
-
-final UserSession session = UserSession.Builder.newBuilder()
-  .withCredentials(UserCredentials.newBuilder()
-.setUserName("foo")
-.build())
-  .withUserProperties(UserProperties.getDefaultInstance())
-  .withOptionManager(drillbitContext1.getOptionManager())
-  .build();
-
-final DrillbitEndpoint drillbitEndpoint1 = 
drillbitContext1.getEndpoint();
-final String controls = Controls.newBuilder()
-  .addPauseOnBit(DummyClass.class, DummyClass.PAUSES, 
drillbitEndpoint1)
-  .build();
-
-ControlsInjectionUtil.setControls(session, controls);
-
-{
-  final long expectedDuration = 1000L;
-  final ExtendedLatch trigger = new ExtendedLatch(1);
-  final Pointer ex = new Pointer<>();
-  final QueryContext queryContext = new QueryContext(session, 
drillbitContext1, QueryId.getDefaultInstance());
-  (new ResumingThread(queryContext, trigger, ex, 
expectedDuration)).start();
-
-  // test that the pause happens
-  final DummyClass dummyClass = new DummyClass(queryContext, trigger);
-  final long actualDuration = dummyClass.pauses();
-  assertTrue(String.format("Test should stop for at least %d 
milliseconds.", expectedDuration),
-expectedDuration <= actualDuration);
-  assertTrue("No exception should be thrown.", ex.value == null);
+  // Creating two drillbits
+  final Drillbit drillbit1, drillbit2;
+  final DrillConfig drillConfig = zkHelper.getConfig();
   try {
-queryContext.close();
-  } catch (final Exception e) {
-fail("Failed to close query context: " + e);
+drillbit1 = Drillbit.start(drillConfig, remoteServiceSet);
+drillbit2 = Drillbit.start(drillConfig, remoteServiceSet);
+  } catch (final DrillbitStartupException e) {
+throw new RuntimeException("Failed to start two drillbits.", e);
   }
-}
 
-{
-  final ExtendedLatch trigger = new ExtendedLatch(1);
-  final QueryContext queryContext = new QueryContext(session, 
drillbitContext2, QueryId.getDefaultInstance());
+  final DrillbitContext drillbitContext1 = drillbit1.getContext();
+  final DrillbitContext drillbitContext2 = drillbit2.getContext();
+
+  final UserSession session = 
UserSession.Builder.newBuilder().withCredentials(UserCredentials.newBuilder().setUserName("foo").build()).withUserProperties(UserProperties.getDefaultInstance()).withOptionManager(drillbitContext1.getOptionManager()).build();
--- End diff --

done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been 

[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161592#comment-16161592
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138132892
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/server/TestOptions.java ---
@@ -56,7 +56,7 @@ public void checkChangedColumn() throws Exception {
 test("ALTER session SET `%s` = %d;", SLICE_TARGET,
   ExecConstants.SLICE_TARGET_DEFAULT);
 testBuilder()
-.sqlQuery("SELECT status FROM sys.options WHERE name = '%s' AND 
type = 'SESSION'", SLICE_TARGET)
+.sqlQuery("SELECT status FROM sys.options WHERE name = '%s' AND 
optionScope = 'SESSION'", SLICE_TARGET)
--- End diff --

Yeah we are in a tough spot because **type** was not well defined 
previously, so the tests implicitly assumed **type** represented where an 
option was set. Now that we have settled on a definition for **type**, which is 
the set of scopes where an option can be set, we have deviated from the meaning 
**type** was given in the tests. One possible way out of this situation is to 
change the definition of **type** and **optionScope** again by swapping their 
meanings:

* **type**: Would become where an option was set.
* **optionScope**: Would become the set of scopes where an option could be 
set.

This would minimize the changes required to the unit tests. It's hard to say how it
would impact other users' scripts, though, because **type** was treated
inconsistently in the code base; I'm not sure how anyone could have used the
**type** information productively except to write unit tests that verified
incorrect behavior.

Long story short, I'll swap the definitions of **type** and 
**optionScope**. I think that would minimize the impact on unit tests and 
users, but we cannot provide any guarantees for users who depended on type when 
it was inconsistently defined.



> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all the options are accessible by the user in sys.options. We would 
> like to add internal options which can be altered, but are not visible in the 
> sys.options table. These internal options could be seen by another alias 
> select * from internal.options. The intention would be to put new options we 
> weren't comfortable with exposing to the end user in this table.
> After the options and their corresponding features are considered stable they 
> could be changed to appear in the sys.option table.
> A bunch of other fixes to the Option system have been clubbed into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the options visibility, required permissions, and the 
> scope in which it can be set.
> * The Option Manager interface has been cleaned up so that a Type is not 
> required to be passed in in order to set and delete options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5723) Support System/Session Internal Options And Additional Option System Fixes

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161594#comment-16161594
 ] 

ASF GitHub Bot commented on DRILL-5723:
---

Github user ilooner commented on a diff in the pull request:

https://github.com/apache/drill/pull/923#discussion_r138132972
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/test/RestClientFixture.java ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.test;
+
+import com.google.common.base.Preconditions;
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.server.rest.StatusResources;
+import org.glassfish.jersey.client.ClientConfig;
+import org.glassfish.jersey.client.JerseyClientBuilder;
+
+import javax.annotation.Nullable;
+import javax.ws.rs.client.Client;
+import javax.ws.rs.client.WebTarget;
+import javax.ws.rs.core.GenericType;
+import javax.ws.rs.core.MediaType;
+
+import java.util.List;
+
+/**
+ * Represents a client for the Drill Rest API.
+ */
+public class RestClientFixture implements AutoCloseable {
+  /**
+   * A builder for the rest client.
+   */
+  public static class Builder {
+private ClusterFixture cluster;
+
+public Builder(ClusterFixture cluster) {
+  this.cluster = Preconditions.checkNotNull(cluster);
+}
+
+public RestClientFixture build() {
+  return new RestClientFixture(cluster);
+}
+  }
+
+  private WebTarget baseTarget;
+  private Client client;
--- End diff --

done


> Support System/Session Internal Options And Additional Option System Fixes
> --
>
> Key: DRILL-5723
> URL: https://issues.apache.org/jira/browse/DRILL-5723
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>
> This is a feature proposed by [~ben-zvi].
> Currently all options are accessible to the user via sys.options. We would 
> like to add internal options which can be altered but are not visible in the 
> sys.options table. These internal options could be viewed through a separate alias, 
> e.g. select * from internal.options. The intention is to put new options we 
> aren't yet comfortable exposing to the end user into this table.
> Once the options and their corresponding features are considered stable, they 
> could be changed to appear in the sys.options table.
> Several other fixes to the option system are bundled into this:
> * OptionValidators no longer hold default values. Default values are 
> contained in the SystemOptionManager.
> * Options have an OptionDefinition. The option definition includes:
>   * A validator
>   * Metadata about the option's visibility, required permissions, and the 
> scope in which it can be set.
> * The OptionManager interface has been cleaned up so that a Type no longer 
> needs to be passed in to set or delete options.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5761) Disable Lilith ClassicMultiplexSocketAppender by default

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161476#comment-16161476
 ] 

ASF GitHub Bot commented on DRILL-5761:
---

Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/930#discussion_r138103031
  
--- Diff: common/src/test/resources/logback-test.xml ---
@@ -0,0 +1,111 @@
[logback-test.xml hunk; the XML element markup was stripped by the mail archive. The recoverable content shows a Lilith socket appender pointed at ${LILITH_HOSTNAME:-localhost}, a console appender with the pattern %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n, and a series of logger definitions.]
--- End diff --

Thanks for pointing this out; it has already been removed.


> Disable Lilith ClassicMultiplexSocketAppender by default
> 
>
> Key: DRILL-5761
> URL: https://issues.apache.org/jira/browse/DRILL-5761
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When running unit tests on a node where the Hiveserver2 service is running, 
> the test run hangs in the middle. Jstack shows that some threads are waiting for 
> a condition.
> {noformat}
> Full thread dump
> "main" prio=10 tid=0x7f0998009800 nid=0x17f7 waiting on condition 
> [0x7f09a0c6d000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00076004ebf0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:324)
>   at 
> de.huxhorn.lilith.sender.MultiplexSendBytesService.sendBytes(MultiplexSendBytesService.java:132)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.sendBytes(MultiplexSocketAppenderBase.java:336)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.append(MultiplexSocketAppenderBase.java:348)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:441)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:395)
>   at ch.qos.logback.classic.Logger.error(Logger.java:558)
>   at 
> org.apache.drill.test.DrillTest$TestLogReporter.failed(DrillTest.java:153)
>   at org.junit.rules.TestWatcher.failedQuietly(TestWatcher.java:84)
>   at org.junit.rules.TestWatcher.access$300(TestWatcher.java:46)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:62)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runners.Suite.runChild(Suite.java:127)
>   at org.junit.runners.Suite.runChild(Suite.java:26)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at 

[jira] [Commented] (DRILL-5761) Disable Lilith ClassicMultiplexSocketAppender by default

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161478#comment-16161478
 ] 

ASF GitHub Bot commented on DRILL-5761:
---

Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/930#discussion_r138100617
  
--- Diff: common/src/test/resources/logback-test.xml ---
@@ -0,0 +1,111 @@
[logback-test.xml hunk; XML markup stripped by the mail archive. Recoverable content: the Lilith socket appender (${LILITH_HOSTNAME:-localhost}), the console appender pattern %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n, and the logger definitions up to the line under review.]
--- End diff --

This logger contains only the Lilith appender, and since we use Lilith 
only for debugging individual tests, I think it would be better to leave the logging 
level here at `DEBUG`. That way, anyone who uses Lilith won't need to 
change the logback file. The console appender is used in the root logger, and it has 
the `ERROR` logging level.


> Disable Lilith ClassicMultiplexSocketAppender by default
> 
>
> Key: DRILL-5761
> URL: https://issues.apache.org/jira/browse/DRILL-5761
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When running unit tests on a node where the Hiveserver2 service is running, 
> the test run hangs in the middle. Jstack shows that some threads are waiting for 
> a condition.
> {noformat}
> Full thread dump
> "main" prio=10 tid=0x7f0998009800 nid=0x17f7 waiting on condition 
> [0x7f09a0c6d000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00076004ebf0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:324)
>   at 
> de.huxhorn.lilith.sender.MultiplexSendBytesService.sendBytes(MultiplexSendBytesService.java:132)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.sendBytes(MultiplexSocketAppenderBase.java:336)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.append(MultiplexSocketAppenderBase.java:348)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:441)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:395)
>   at ch.qos.logback.classic.Logger.error(Logger.java:558)
>   at 
> org.apache.drill.test.DrillTest$TestLogReporter.failed(DrillTest.java:153)
>   at org.junit.rules.TestWatcher.failedQuietly(TestWatcher.java:84)
>   at org.junit.rules.TestWatcher.access$300(TestWatcher.java:46)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:62)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runners.Suite.runChild(Suite.java:127)
>   at org.junit.runners.Suite.runChild(Suite.java:26)
>   

[jira] [Commented] (DRILL-5761) Disable Lilith ClassicMultiplexSocketAppender by default

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161477#comment-16161477
 ] 

ASF GitHub Bot commented on DRILL-5761:
---

Github user vvysotskyi commented on a diff in the pull request:

https://github.com/apache/drill/pull/930#discussion_r138102613
  
--- Diff: common/src/test/resources/logback-test.xml ---
@@ -0,0 +1,111 @@
[logback-test.xml hunk; XML markup stripped by the mail archive. Recoverable content: the Lilith socket appender (${LILITH_HOSTNAME:-localhost}), the console appender pattern, and the remaining logger definitions, including the one discussed below.]
--- End diff --

Initially, I assumed it would be useful for those who work with the 
mapr-format-plugin. I agree with you that we should delete this logger.


> Disable Lilith ClassicMultiplexSocketAppender by default
> 
>
> Key: DRILL-5761
> URL: https://issues.apache.org/jira/browse/DRILL-5761
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> When running unit tests on a node where the Hiveserver2 service is running, 
> the test run hangs in the middle. Jstack shows that some threads are waiting for 
> a condition.
> {noformat}
> Full thread dump
> "main" prio=10 tid=0x7f0998009800 nid=0x17f7 waiting on condition 
> [0x7f09a0c6d000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x00076004ebf0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:324)
>   at 
> de.huxhorn.lilith.sender.MultiplexSendBytesService.sendBytes(MultiplexSendBytesService.java:132)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.sendBytes(MultiplexSocketAppenderBase.java:336)
>   at 
> de.huxhorn.lilith.logback.appender.MultiplexSocketAppenderBase.append(MultiplexSocketAppenderBase.java:348)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:272)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:259)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:441)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:395)
>   at ch.qos.logback.classic.Logger.error(Logger.java:558)
>   at 
> org.apache.drill.test.DrillTest$TestLogReporter.failed(DrillTest.java:153)
>   at org.junit.rules.TestWatcher.failedQuietly(TestWatcher.java:84)
>   at org.junit.rules.TestWatcher.access$300(TestWatcher.java:46)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:62)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 

[jira] [Commented] (DRILL-5749) Foreman and Netty threads occur deadlock

2017-09-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161436#comment-16161436
 ] 

ASF GitHub Bot commented on DRILL-5749:
---

Github user weijietong commented on the issue:

https://github.com/apache/drill/pull/925
  
@paul-rogers I have refactored the code. @sudheeshkatkam nothing to fix; 
once the netty thread gets the re-spawned RPC connection, it will send out the 
message.


> Foreman and Netty threads occur deadlock 
> --
>
> Key: DRILL-5749
> URL: https://issues.apache.org/jira/browse/DRILL-5749
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.10.0, 1.11.0
>Reporter: weijie.tong
>Priority: Critical
>
> When the cluster was running highly concurrent queries and the reused control 
> connection hit an exception, the foreman and netty threads each tried to 
> acquire the lock held by the other, and a deadlock occurred. The netty thread holds the 
> map (RequestIdMap) lock and then tries to acquire the ReconnectingConnection lock 
> to send a command, while the foreman thread holds the ReconnectingConnection 
> lock and then tries to acquire the RequestIdMap lock. So the deadlock happens.
> Below is the jstack dump:
> Found one Java-level deadlock:
> =
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman":
>   waiting to lock monitor 0x7f90de3b9648 (object 0x0006b524d7e8, a 
> com.carrotsearch.hppc.IntObjectHashMap),
>   which is held by "BitServer-2"
> "BitServer-2":
>   waiting to lock monitor 0x7f935b721f48 (object 0x000656affc40, a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager),
>   which is held by "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman"
> Java stack information for the threads listed above:
> ===
> "265aa5cb-e5e2-39ed-9c2f-7658b905372e:foreman":
>   at 
> org.apache.drill.exec.rpc.ReconnectingConnection.runCommand(ReconnectingConnection.java:72)
>   - waiting to lock <0x000656affc40> (a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel.sendFragments(ControlTunnel.java:66)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.sendRemoteFragments(Foreman.java:1210)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.setupNonRootFragments(Foreman.java:1141)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:454)
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1045)
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
>   at java.lang.Thread.run(Thread.java:849)
> "265aa82f-d8c1-5df0-9946-003a4990db7e:foreman":
>   at 
> org.apache.drill.exec.rpc.RequestIdMap.createNewRpcListener(RequestIdMap.java:87)
>   - waiting to lock <0x0006b524d7e8> (a 
> com.carrotsearch.hppc.IntObjectHashMap)
>   at 
> org.apache.drill.exec.rpc.AbstractRemoteConnection.createNewRpcListener(AbstractRemoteConnection.java:153)
>   at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:115)
>   at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:89)
>   at 
> org.apache.drill.exec.rpc.control.ControlConnection.send(ControlConnection.java:65)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel$SendFragment.doRpcCall(ControlTunnel.java:160)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel$SendFragment.doRpcCall(ControlTunnel.java:150)
>   at 
> org.apache.drill.exec.rpc.ListeningCommand.connectionAvailable(ListeningCommand.java:38)
>   at 
> org.apache.drill.exec.rpc.ReconnectingConnection.runCommand(ReconnectingConnection.java:75)
>   - locked <0x000656affc40> (a 
> org.apache.drill.exec.rpc.control.ControlConnectionManager)
>   at 
> org.apache.drill.exec.rpc.control.ControlTunnel.sendFragments(ControlTunnel.java:66)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.sendRemoteFragments(Foreman.java:1210)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.setupNonRootFragments(Foreman.java:1141)
>   at 
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:454)
>   at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1045)
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274)
>   at 
> 
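For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of the lock-ordering deadlock described in this issue: two threads taking the same two monitors in opposite order. The class and lock names are illustrative stand-ins, not Drill's actual code.

{code}
// Sketch of a lock-ordering deadlock; names are stand-ins for the locks above.
public class LockOrderingDeadlockSketch {
  private static final Object connectionLock = new Object(); // stands in for ReconnectingConnection
  private static final Object requestMapLock = new Object(); // stands in for RequestIdMap

  public static void main(String[] args) {
    // "Foreman" thread: takes the connection lock, then needs the request-map lock.
    Thread foreman = new Thread(() -> {
      synchronized (connectionLock) {
        sleep(100);
        synchronized (requestMapLock) {
          System.out.println("foreman sent fragments");
        }
      }
    }, "foreman");

    // "Netty" thread: takes the request-map lock, then needs the connection lock.
    Thread netty = new Thread(() -> {
      synchronized (requestMapLock) {
        sleep(100);
        synchronized (connectionLock) {
          System.out.println("netty delivered response");
        }
      }
    }, "netty");

    foreman.start();
    netty.start();
    // With the sleeps above, both threads block forever: a classic lock-ordering deadlock.
  }

  private static void sleep(long millis) {
    try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }
}
{code}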

[jira] [Closed] (DRILL-5780) Apache Drill in embedded mode on Windows: Failure setting up ZK for client

2017-09-11 Thread Mayank Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Jain closed DRILL-5780.
--
Resolution: Fixed

I had run the command from Git Bash, which removed the '=' from the sqlline.bat command.
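As a quick sanity check of the same embedded mode from Java, here is a minimal sketch using the JDBC URL from the report. It assumes the Drill JDBC driver is on the classpath; the query against sys.version is just an example.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmbeddedDrillCheck {
  public static void main(String[] args) throws Exception {
    // zk=local starts an embedded Drillbit, the same mode sqlline.bat -u "jdbc:drill:zk=local" uses.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT version FROM sys.version")) {
      while (rs.next()) {
        System.out.println("Connected to Drill " + rs.getString("version"));
      }
    }
  }
}
{code}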

> Apache Drill in embedded mode on Windows: Failure setting up ZK for client
> --
>
> Key: DRILL-5780
> URL: https://issues.apache.org/jira/browse/DRILL-5780
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Mayank Jain
>
> $ ./sqlline.bat -u "jdbc:drill:zk=local"
> DRILL_ARGS - " -u jdbc:drill:zk local"
> HADOOP_HOME not detected...
> HBASE_HOME not detected...
> Calculating Drill classpath...
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: F
> ailure setting up ZK for client. (state=,code=0)
> java.sql.SQLException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc
> .RpcException: Failure setting up ZK for client.
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:167)
> at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
> lJdbc41Factory.java:72)
> at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
> va:69)
> at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
> ver.java:143)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
> at sqlline.Commands.connect(Commands.java:1083)
> at sqlline.Commands.connect(Commands.java:1015)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
> a:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:742)
> at sqlline.SqlLine.initArgs(SqlLine.java:528)
> at sqlline.SqlLine.begin(SqlLine.java:596)
> at sqlline.SqlLine.start(SqlLine.java:375)
> at sqlline.SqlLine.main(SqlLine.java:268)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> cli
> ent.
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
> )
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:158)
> ... 18 more
> Caused by: java.io.IOException: Failure to connect to the zookeeper cluster 
> serv
> ice within the allotted time of 1 milliseconds.
> at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
> ordinator.java:123)
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
> )
> ... 19 more
> local (The system cannot find the file specified)
> apache drill 1.11.0
> "a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (DRILL-5780) Apache Drill in embedded mode on Windows: Failure setting up ZK for client

2017-09-11 Thread Mayank Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161133#comment-16161133
 ] 

Mayank Jain edited comment on DRILL-5780 at 9/11/17 12:20 PM:
--

I followed instructions on this page:
https://drill.apache.org/docs/drill-in-10-minutes/

Any help would be highly appreciated



was (Author: j-mayank):
I followed instructions on this page:
https://drill.apache.org/docs/drill-in-10-minutes/


> Apache Drill in embedded mode on Windows: Failure setting up ZK for client
> --
>
> Key: DRILL-5780
> URL: https://issues.apache.org/jira/browse/DRILL-5780
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Mayank Jain
>
> $ ./sqlline.bat -u "jdbc:drill:zk=local"
> DRILL_ARGS - " -u jdbc:drill:zk local"
> HADOOP_HOME not detected...
> HBASE_HOME not detected...
> Calculating Drill classpath...
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: F
> ailure setting up ZK for client. (state=,code=0)
> java.sql.SQLException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc
> .RpcException: Failure setting up ZK for client.
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:167)
> at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
> lJdbc41Factory.java:72)
> at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
> va:69)
> at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
> ver.java:143)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
> at sqlline.Commands.connect(Commands.java:1083)
> at sqlline.Commands.connect(Commands.java:1015)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
> a:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:742)
> at sqlline.SqlLine.initArgs(SqlLine.java:528)
> at sqlline.SqlLine.begin(SqlLine.java:596)
> at sqlline.SqlLine.start(SqlLine.java:375)
> at sqlline.SqlLine.main(SqlLine.java:268)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> cli
> ent.
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
> )
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:158)
> ... 18 more
> Caused by: java.io.IOException: Failure to connect to the zookeeper cluster 
> serv
> ice within the allotted time of 1 milliseconds.
> at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
> ordinator.java:123)
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
> )
> ... 19 more
> local (The system cannot find the file specified)
> apache drill 1.11.0
> "a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5780) Apache Drill in embedded mode on Windows: Failure setting up ZK for client

2017-09-11 Thread Mayank Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Jain updated DRILL-5780:
---
Summary: Apache Drill in embedded mode on Windows: Failure setting up ZK 
for client  (was: Apache Drill in embedded mode: Failure setting up ZK for 
client)

> Apache Drill in embedded mode on Windows: Failure setting up ZK for client
> --
>
> Key: DRILL-5780
> URL: https://issues.apache.org/jira/browse/DRILL-5780
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Mayank Jain
>
> $ ./sqlline.bat -u "jdbc:drill:zk=local"
> DRILL_ARGS - " -u jdbc:drill:zk local"
> HADOOP_HOME not detected...
> HBASE_HOME not detected...
> Calculating Drill classpath...
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: F
> ailure setting up ZK for client. (state=,code=0)
> java.sql.SQLException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc
> .RpcException: Failure setting up ZK for client.
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:167)
> at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
> lJdbc41Factory.java:72)
> at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
> va:69)
> at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
> ver.java:143)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
> at sqlline.Commands.connect(Commands.java:1083)
> at sqlline.Commands.connect(Commands.java:1015)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
> a:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:742)
> at sqlline.SqlLine.initArgs(SqlLine.java:528)
> at sqlline.SqlLine.begin(SqlLine.java:596)
> at sqlline.SqlLine.start(SqlLine.java:375)
> at sqlline.SqlLine.main(SqlLine.java:268)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> cli
> ent.
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
> )
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:158)
> ... 18 more
> Caused by: java.io.IOException: Failure to connect to the zookeeper cluster 
> serv
> ice within the allotted time of 1 milliseconds.
> at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
> ordinator.java:123)
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
> )
> ... 19 more
> local (The system cannot find the file specified)
> apache drill 1.11.0
> "a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5780) Apache Drill in embedded mode: Failure setting up ZK for client

2017-09-11 Thread Mayank Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161133#comment-16161133
 ] 

Mayank Jain commented on DRILL-5780:


I followed instructions on this page:
https://drill.apache.org/docs/drill-in-10-minutes/


> Apache Drill in embedded mode: Failure setting up ZK for client
> ---
>
> Key: DRILL-5780
> URL: https://issues.apache.org/jira/browse/DRILL-5780
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Mayank Jain
>
> $ ./sqlline.bat -u "jdbc:drill:zk=local"
> DRILL_ARGS - " -u jdbc:drill:zk local"
> HADOOP_HOME not detected...
> HBASE_HOME not detected...
> Calculating Drill classpath...
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: F
> ailure setting up ZK for client. (state=,code=0)
> java.sql.SQLException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc
> .RpcException: Failure setting up ZK for client.
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:167)
> at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
> lJdbc41Factory.java:72)
> at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
> va:69)
> at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
> ver.java:143)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
> at sqlline.Commands.connect(Commands.java:1083)
> at sqlline.Commands.connect(Commands.java:1015)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
> a:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:742)
> at sqlline.SqlLine.initArgs(SqlLine.java:528)
> at sqlline.SqlLine.begin(SqlLine.java:596)
> at sqlline.SqlLine.start(SqlLine.java:375)
> at sqlline.SqlLine.main(SqlLine.java:268)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> cli
> ent.
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
> )
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:158)
> ... 18 more
> Caused by: java.io.IOException: Failure to connect to the zookeeper cluster 
> serv
> ice within the allotted time of 1 milliseconds.
> at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
> ordinator.java:123)
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
> )
> ... 19 more
> local (The system cannot find the file specified)
> apache drill 1.11.0
> "a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5780) Apache Drill in embedded mode: Failure setting up ZK for client

2017-09-11 Thread Mayank Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Jain updated DRILL-5780:
---
Summary: Apache Drill in embedded mode: Failure setting up ZK for client  
(was: Apache Drill in embedded mode)

> Apache Drill in embedded mode: Failure setting up ZK for client
> ---
>
> Key: DRILL-5780
> URL: https://issues.apache.org/jira/browse/DRILL-5780
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Mayank Jain
>
> $ ./sqlline.bat -u "jdbc:drill:zk=local"
> DRILL_ARGS - " -u jdbc:drill:zk local"
> HADOOP_HOME not detected...
> HBASE_HOME not detected...
> Calculating Drill classpath...
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: F
> ailure setting up ZK for client. (state=,code=0)
> java.sql.SQLException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc
> .RpcException: Failure setting up ZK for client.
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:167)
> at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
> lJdbc41Factory.java:72)
> at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
> va:69)
> at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
> ver.java:143)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
> at sqlline.Commands.connect(Commands.java:1083)
> at sqlline.Commands.connect(Commands.java:1015)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
> a:36)
> at sqlline.SqlLine.dispatch(SqlLine.java:742)
> at sqlline.SqlLine.initArgs(SqlLine.java:528)
> at sqlline.SqlLine.begin(SqlLine.java:596)
> at sqlline.SqlLine.start(SqlLine.java:375)
> at sqlline.SqlLine.main(SqlLine.java:268)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> cli
> ent.
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
> )
> at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
> Impl.java:158)
> ... 18 more
> Caused by: java.io.IOException: Failure to connect to the zookeeper cluster 
> serv
> ice within the allotted time of 1 milliseconds.
> at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
> ordinator.java:123)
> at 
> org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
> )
> ... 19 more
> local (The system cannot find the file specified)
> apache drill 1.11.0
> "a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (DRILL-5780) Apache Drill in embedded mode

2017-09-11 Thread Mayank Jain (JIRA)
Mayank Jain created DRILL-5780:
--

 Summary: Apache Drill in embedded mode
 Key: DRILL-5780
 URL: https://issues.apache.org/jira/browse/DRILL-5780
 Project: Apache Drill
  Issue Type: Task
Reporter: Mayank Jain


$ ./sqlline.bat -u "jdbc:drill:zk=local"
DRILL_ARGS - " -u jdbc:drill:zk local"
HADOOP_HOME not detected...
HBASE_HOME not detected...
Calculating Drill classpath...
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: F
ailure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc
.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
Impl.java:167)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(Dril
lJdbc41Factory.java:72)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.ja
va:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDri
ver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)

at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
sorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.jav
a:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for cli
ent.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:329
)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnection
Impl.java:158)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster serv
ice within the allotted time of 1 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCo
ordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:327
)
... 19 more
local (The system cannot find the file specified)
apache drill 1.11.0
"a drill in the hand is better than two in the bush"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5766) Stored XSS in APACHE DRILL

2017-09-11 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5766:

Labels: cross-site-scripting ready-to-commit security security-issue xss  
(was: cross-site-scripting security security-issue xss)

> Stored XSS in APACHE DRILL
> --
>
> Key: DRILL-5766
> URL: https://issues.apache.org/jira/browse/DRILL-5766
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0
> Environment: Apache drill installed in debian system
>Reporter: Sanjog Panda
>Assignee: Arina Ielchiieva
>Priority: Critical
>  Labels: cross-site-scripting, ready-to-commit, security, 
> security-issue, xss
> Fix For: 1.12.0
>
> Attachments: XSS - Sink.png, XSS - Source.png
>
>
> Hello Apache security team,
> I have been testing an application which internally uses Apache Drill 
> (version 1.6 as of now).
> I found stored XSS on the profile page (sink), where the user's malicious input comes 
> from the Query page (source) where you run a query. 
> Affected URL: https://localhost:8047/profiles 
> Once the user submits the payload below and loads the profile page, it gets 
> triggered and is stored.
> I have attached a screenshot of the payload 
> alert(document.cookie).
> Screenshot links:
> https://drive.google.com/file/d/0B8giJ3591fvUbm5JZWtjUTg3WmEwYmJQeWd6dURuV0gzOVd3/view?usp=sharing
> https://drive.google.com/file/d/0B8giJ3591fvUV2lJRzZWOWRGNzN5S0JzdVlXSG1iNnVwRlAw/view?usp=sharing
>  
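As general background (this is not necessarily the fix applied in Drill), stored XSS of this kind is usually addressed by HTML-escaping user-supplied values before they are rendered on the profiles page. A minimal, self-contained sketch with an illustrative payload:

{code}
public final class HtmlEscapeSketch {
  /** Escapes the characters that allow HTML/script injection. */
  static String escapeHtml(String input) {
    StringBuilder sb = new StringBuilder(input.length());
    for (char c : input.toCharArray()) {
      switch (c) {
        case '<':  sb.append("&lt;");   break;
        case '>':  sb.append("&gt;");   break;
        case '&':  sb.append("&amp;");  break;
        case '"':  sb.append("&quot;"); break;
        case '\'': sb.append("&#39;");  break;
        default:   sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Illustrative payload only, not necessarily the reporter's exact input.
    String query = "select 1; <script>alert(document.cookie)</script>";
    // Rendering the escaped form in the profiles page defuses the stored payload.
    System.out.println(escapeHtml(query));
  }
}
{code}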



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5766) Stored XSS in APACHE DRILL

2017-09-11 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5766:

Reviewer: Parth Chandra

> Stored XSS in APACHE DRILL
> --
>
> Key: DRILL-5766
> URL: https://issues.apache.org/jira/browse/DRILL-5766
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0
> Environment: Apache drill installed in debian system
>Reporter: Sanjog Panda
>Assignee: Arina Ielchiieva
>Priority: Critical
>  Labels: cross-site-scripting, ready-to-commit, security, 
> security-issue, xss
> Fix For: 1.12.0
>
> Attachments: XSS - Sink.png, XSS - Source.png
>
>
> Hello Apache security team,
> I have been testing an application which internally uses Apache Drill 
> (version 1.6 as of now).
> I found stored XSS on the profile page (sink), where the user's malicious input comes 
> from the Query page (source) where you run a query. 
> Affected URL: https://localhost:8047/profiles 
> Once the user submits the payload below and loads the profile page, it gets 
> triggered and is stored.
> I have attached a screenshot of the payload 
> alert(document.cookie).
> Screenshot links:
> https://drive.google.com/file/d/0B8giJ3591fvUbm5JZWtjUTg3WmEwYmJQeWd6dURuV0gzOVd3/view?usp=sharing
> https://drive.google.com/file/d/0B8giJ3591fvUV2lJRzZWOWRGNzN5S0JzdVlXSG1iNnVwRlAw/view?usp=sharing
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)