[ https://issues.apache.org/jira/browse/DRILL-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Parth Chandra updated DRILL-2598:
---------------------------------
    Fix Version/s: 0.9.0

> Order by with limit on complex type throws IllegalStateException
> ----------------------------------------------------------------
>
>                 Key: DRILL-2598
>                 URL: https://issues.apache.org/jira/browse/DRILL-2598
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Data Types
>    Affects Versions: 0.8.0
>            Reporter: Chun Chang
>            Assignee: Hanifi Gunes
>            Priority: Blocker
>             Fix For: 0.9.0
>
>
> Drill 0.8 release candidate:
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select * from sys.version;
> +------------+----------------+-------------+-------------+------------+
> | commit_id  | commit_message | commit_time | build_email | build_time |
> +------------+----------------+-------------+-------------+------------+
> | 462e50ce9c4b829c2a4bafdeb9763bfba677c726 | DRILL-2575: FragmentExecutor.cancel() blasts through state transitions regardless of current state | 25.03.2015 @ 21:11:23 PDT |
> {code}
> The following query involving limit and order by caused the IllegalStateException:
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select t.id, t.soa from `complex.json` t order by t.id limit 10;
> +------------+------------+
> |     id     |    soa     |
> +------------+------------+
> | 1          | [{"in":1},{"in":1,"fl":1.12345},{"in":1,"fl":10.12345},{"in":1,"fl":10.6789,"bool":true,"str":"here is a string at row 1"}] |
> | 2          | [{"in":2},{"in":2,"fl":2.12345},{"in":2,"fl":20.12345},{"in":2,"fl":20.6789,"bool":true,"str":"here is a string at row 2","nul":"not null"}] |
> | 3          | [{"in":3},{"in":3,"fl":3.12345},{"in":3,"fl":30.12345,"nul":"not null"},{"in":3,"fl":30.6789,"bool":true,"str":"here is a string at row 3"}] |
> | 4          | [{"in":4},{"in":4,"fl":4.12345},{"in":4,"fl":40.12345,"nul":"not null"},{"in":4,"fl":40.6789,"bool":false,"str":"here is a string at row 4","nul":"not null"}] |
> | 5          | [{"in":5},{"in":5,"fl":5.12345},{"in":5,"fl":50.12345,"nul":"not null"},{"in":5,"fl":50.6789,"bool":false,"str":"here is a string at row 5"}] |
> | 6          | [{"in":6},{"in":6,"fl":6.12345},{"in":6,"fl":60.12345,"nul":"not null"},{"in":6,"fl":60.6789,"bool":false,"str":"here is a string at row 6"}] |
> | 7          | [{"in":7},{"in":7,"fl":7.12345},{"in":7,"fl":70.12345,"nul":"not null"},{"in":7,"fl":70.6789,"bool":false,"str":"here is a string at row 7","nul":"not null"}] |
> | 8          | [{"in":8},{"in":8,"fl":8.12345},{"in":8,"fl":80.12345,"nul":"not null"},{"in":8,"fl":80.6789,"bool":true,"str":"here is a string at row 8","nul":"not null"}] |
> | 9          | [{"in":9},{"in":9,"fl":9.12345},{"in":9,"fl":90.12345,"nul":"not null"},{"in":9,"fl":90.6789,"bool":true,"str":"here is a string at row 9"}] |
> | 10         | [{"in":10},{"in":10,"fl":10.12345},{"in":10,"fl":100.12345,"nul":"not null"},{"in":10,"fl":100.6789,"bool":false,"str":"here is a string at row 10","nul":"not null"}] |
> Query failed: RemoteRpcException: Failure while running fragment., Attempted to close accountor with 25 buffer(s) still allocatedfor QueryId: 2aeb3baf-acc1-5615-4537-f215a47d4893, MajorFragmentId: 0, MinorFragmentId: 0.
>       Total 25 allocation(s) of byte size(s): 512, 512, 512, 512, 512, 512, 
> 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 
> 512, 512, 512, 512, at stack location:
>               org.apache.drill.exec.memory.TopLevelAllocator$ChildAllocator.buffer(TopLevelAllocator.java:231)
>               org.apache.drill.exec.vector.BitVector.allocateNewSafe(BitVector.java:95)
>               org.apache.drill.exec.vector.BitVector.allocateNew(BitVector.java:78)
>               org.apache.drill.exec.vector.NullableBitVector.allocateNew(NullableBitVector.java:168)
>               org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:282)
>               org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:260)
>               org.apache.drill.exec.vector.complex.RepeatedMapVector.getTransferPair(RepeatedMapVector.java:126)
>               org.apache.drill.exec.physical.impl.sort.RecordBatchData.<init>(RecordBatchData.java:57)
>               org.apache.drill.exec.physical.impl.TopN.TopNBatch.innerNext(TopNBatch.java:222)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
>               org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
>               org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>               org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:96)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
>               org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
>               org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>               org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:113)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
>               org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
>               org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>               org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:96)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
>               org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:99)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:89)
>               org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>               org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
>               org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
>               org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118)
>               org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:68)
>               org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:96)
>               org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:58)
>               org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:163)
>               org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>               java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>               java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>               java.lang.Thread.run(Thread.java:744)
> {code}
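> The stack trace points at the BitVectors that RepeatedMapVector$RepeatedMapTransferPair allocates while TopNBatch copies the incoming batch into a RecordBatchData; those 512-byte buffers apparently never get released before the fragment's child allocator is closed. Purely as an illustration of that allocate-in-constructor-but-never-release pattern (hypothetical classes, not Drill's actual API), a minimal sketch looks like this:
> {code}
> // Hypothetical sketch of the suspected leak pattern; names are illustrative,
> // not Drill's real classes.
> import java.util.ArrayList;
> import java.util.List;
> 
> class Allocator {
>   private final List<byte[]> outstanding = new ArrayList<>();
> 
>   byte[] buffer(int size) {            // hand out a tracked buffer
>     byte[] buf = new byte[size];
>     outstanding.add(buf);
>     return buf;
>   }
> 
>   void release(byte[] buf) {           // callers must return every buffer
>     outstanding.remove(buf);
>   }
> 
>   void close() {                       // mirrors the accountor check in the error above
>     if (!outstanding.isEmpty()) {
>       throw new IllegalStateException("Attempted to close allocator with "
>           + outstanding.size() + " buffer(s) still allocated");
>     }
>   }
> }
> 
> class TransferPair {
>   private final Allocator allocator;
>   private final byte[] toBuffer;
> 
>   TransferPair(Allocator allocator) {
>     this.allocator = allocator;
>     this.toBuffer = allocator.buffer(512);  // allocated eagerly in the constructor
>   }
> 
>   void clear() {                           // never invoked on the failing path
>     allocator.release(toBuffer);
>   }
> }
> 
> public class LeakSketch {
>   public static void main(String[] args) {
>     Allocator allocator = new Allocator();
>     for (int i = 0; i < 25; i++) {
>       new TransferPair(allocator);         // 25 pairs created, none cleared
>     }
>     allocator.close();                     // throws: 25 buffer(s) still allocated
>   }
> }
> {code}
> If that reading of the trace is right, the copy made on the TopN/limit path would need to transfer or clear those child vectors before the batch is dropped; that is only a guess from the stack, not a confirmed root cause.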
> Physical plan:
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> explain plan for select t.id, t.soa from `complex.json` t order by t.id limit 10;
> +------------+------------+
> |    text    |    json    |
> +------------+------------+
> | 00-00    Screen
> 00-01      Project(id=[$0], soa=[$1])
> 00-02        SelectionVectorRemover
> 00-03          Limit(fetch=[10])
> 00-04            SelectionVectorRemover
> 00-05              TopN(limit=[10])
> 00-06                Project(id=[$1], soa=[$0])
> 00-07                  Scan(groupscan=[EasyGroupScan [selectionRoot=/drill/testdata/complex/json/complex.json, numFiles=1, columns=[`id`, `soa`], files=[maprfs:/drill/testdata/complex/json/complex.json/complex.json]]])
> {code}
> Data can be downloaded from:
> https://s3.amazonaws.com/apache-drill/files/complex100k.json.gz
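> For reproducing this outside of sqlline, a minimal JDBC sketch is below. The connection URL and file path are assumptions (a local Drillbit with the extracted complex.json visible to the dfs plugin), not details from the original report:
> {code}
> // Hypothetical reproduction harness: runs the failing query over JDBC.
> // Assumes drill-jdbc-all is on the classpath and a Drillbit is running locally;
> // adjust the URL (e.g. jdbc:drill:zk=<host>:2181) and the path for your setup.
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
> 
> public class Drill2598Repro {
>   public static void main(String[] args) throws Exception {
>     try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
>          Statement stmt = conn.createStatement();
>          ResultSet rs = stmt.executeQuery(
>              "select t.id, t.soa from dfs.`/tmp/complex.json` t order by t.id limit 10")) {
>       while (rs.next()) {
>         System.out.println(rs.getObject("id") + " | " + rs.getObject("soa"));
>       }
>     }
>   }
> }
> {code}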


