[ https://issues.apache.org/jira/browse/DRILL-800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011485#comment-14011485 ]

Jacques Nadeau commented on DRILL-800:
--------------------------------------

Fixed in 6dd3ff9

> Partitioner is dropping records that can't fit in the available space of 
> ValueVectors in OutgoingRecordBatch
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: DRILL-800
>                 URL: https://issues.apache.org/jira/browse/DRILL-800
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Venki Korukanti
>            Assignee: Venki Korukanti
>         Attachments: DRILL-800-2.patch
>
>
> Partitioner code looks like:
> {code}
> public void partitionBatch(RecordBatch incoming) {
>   for (int recordId = 0; recordId < incoming.getRecordCount(); ++recordId) {
>     doEval(recordId, 0);
>   }
> }
> {code}
> In doEval
> {code}
> public void doEval(int inIndex, int outIndex) {
>   ....
>   if (!((NullableBigIntVector) outgoingVectors[(bucket)][0])
>       .copyFromSafe((inIndex), outgoingBatches[(bucket)].getRecordCount(), vv35)) {
>     outgoingBatches[(bucket)].flush();
>     return;
>   }
>   ....
>   outgoingBatches[(bucket)].incRecordCount();
>   outgoingBatches[(bucket)].flushIfNecessary();
> }
> {code}
> If copyFromSafe returns false due to insufficient space, the existing 
> records in the outgoing batch are flushed and doEval returns, so the loop in 
> partitionBatch moves on to the next record. The record that couldn't fit is 
> never copied again after the flush, so it is silently dropped.
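The dropped-record behavior described above can be illustrated with a small toy model. The sketch below uses hypothetical names (ToyBatch, CAPACITY) and is not the actual fix committed in 6dd3ff9; it only shows the retry-after-flush pattern, where a failed copy triggers a flush and the same record is copied again instead of being skipped:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for an outgoing batch with a fixed amount of space.
class ToyBatch {
    static final int CAPACITY = 4;               // records per batch (arbitrary)
    List<Integer> values = new ArrayList<>();    // current batch contents
    List<List<Integer>> flushed = new ArrayList<>(); // batches already flushed

    // Mirrors copyFromSafe: returns false when there is no space left.
    boolean copyFromSafe(int v) {
        if (values.size() >= CAPACITY) {
            return false;
        }
        values.add(v);
        return true;
    }

    void flush() {
        flushed.add(new ArrayList<>(values));
        values.clear();
    }
}

public class PartitionRetry {
    public static void main(String[] args) {
        ToyBatch batch = new ToyBatch();
        int copied = 0;
        for (int v = 0; v < 10; v++) {
            // Retry the copy after flushing, so the record that did not
            // fit is kept rather than dropped.
            while (!batch.copyFromSafe(v)) {
                batch.flush();
            }
            copied++;
        }
        batch.flush();
        int total = 0;
        for (List<Integer> b : batch.flushed) {
            total += b.size();
        }
        // All 10 input records survive into the flushed batches.
        System.out.println(copied + " " + total);
    }
}
```

With the buggy pattern, the `while` would be an `if` that flushes and skips, losing one record at every batch boundary; the retry loop keeps every record.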



--
This message was sent by Atlassian JIRA
(v6.2#6252)
