I think it is because we can't actually properly account for sliced
buffers. I don't remember for sure, but I think it might be because calling
buf.capacity() on a sliced buffer returns the capacity of the root buffer,
not the size of the slice. That may not be exactly right, but it was
something like that. Whatever the cause, I am pretty sure the method was
giving wrong results when there were sliced buffers.
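
For what it's worth, here is a very rough sketch of the kind of bookkeeping
that "proper" accounting would need: resolve every buffer (sliced or not)
back to its root and count each root exactly once. The rootOf() helper below
is hypothetical; I don't believe we expose a clean way to get from a slice
back to its root buffer today, which is part of the problem.

  // Sketch only, not Drill's code. Counts each root buffer once, whether
  // the batch holds the root itself or only slices of it.
  // Uses java.util.Set, java.util.Collections and java.util.IdentityHashMap.
  private long getBufferSizeDeduped(VectorAccessible batch) {
    Set<DrillBuf> roots =
        Collections.newSetFromMap(new IdentityHashMap<DrillBuf, Boolean>());
    for (VectorWrapper<?> w : batch) {
      for (DrillBuf buf : w.getValueVector().getBuffers(false)) {
        roots.add(rootOf(buf));   // hypothetical slice -> root resolution
      }
    }
    long size = 0;
    for (DrillBuf root : roots) {
      size += root.capacity();    // each root's memory counted once
    }
    return size;
  }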

I think we need the new allocator, along with proper transfer of ownership,
in order to do this correctly. Then we can simply query the allocator rather
than trying to track the size separately.
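
Something along these lines, where the exact method names are just
placeholders for whatever the new allocator ends up exposing:

  // Rough sketch, assuming the operator's allocator owns (or shares
  // ownership of) every buffer the sort is holding. getAllocatedMemory(),
  // the threshold and the spill hook are placeholders, not existing code.
  long allocated = oContext.getAllocator().getAllocatedMemory();
  if (allocated > spillThreshold) {
    spillToDisk();   // trigger the external sort's spill path
  }

At that point the number comes straight from the accounting the allocator
already does, so sliced versus root buffers stops being our problem.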

On Fri, Nov 20, 2015 at 11:25 AM, Abdel Hakim Deneche <adene...@maprtech.com>
wrote:

> I'm looking at the external sort code and it uses the following method to
> compute the allocated size of a batch:
>
>   private long getBufferSize(VectorAccessible batch) {
>     long size = 0;
>     for (VectorWrapper<?> w : batch) {
>       DrillBuf[] bufs = w.getValueVector().getBuffers(false);
>       for (DrillBuf buf : bufs) {
>         if (buf.isRootBuffer()) {
>           size += buf.capacity();
>         }
>       }
>     }
>     return size;
>   }
>
>
> This method only accounts for root buffers, but when we have a receiver
> below the sort, most (if not all) of the buffers are child buffers. This
> may delay spilling and increase the memory usage of the drillbit. If my
> computations are correct, for a single query, one drillbit can allocate up
> to 40GB without spilling to disk even once.
>
> Is there a specific reason we only account for root buffers?
>
> --
>
> Abdelhakim Deneche
>
> Software Engineer
