The client typically does a merge sort when combining results from
parallel scans. I'm not sure whether that would explain the CPU/memory usage
you're seeing. Can you narrow it down to the particular query that's causing
the issue?
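
For reference, the general technique here is an n-way merge of already-sorted
streams. The sketch below is illustrative only (not Phoenix's actual code, and
the class name and data are made up); it just shows the kind of work the client
has to do when it combines ordered results coming back from many parallel scans:

import java.util.*;

// Illustrative sketch: k-way merge of pre-sorted chunks via a priority queue,
// roughly the shape of the work a client does when it merges ordered results
// from parallel scans. Not Phoenix code; the chunks are made-up sample data.
public class MergeSortedChunksDemo {

    static List<Integer> mergeSortedChunks(List<List<Integer>> chunks) {
        // Each heap entry is {value, chunkIndex, offsetWithinChunk}.
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        for (int i = 0; i < chunks.size(); i++) {
            if (!chunks.get(i).isEmpty()) {
                heap.add(new int[] {chunks.get(i).get(0), i, 0});
            }
        }

        List<Integer> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            merged.add(top[0]);                       // emit smallest head
            int chunk = top[1], next = top[2] + 1;
            if (next < chunks.get(chunk).size()) {    // advance that chunk
                heap.add(new int[] {chunks.get(chunk).get(next), chunk, next});
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<List<Integer>> chunks = Arrays.asList(
                Arrays.asList(1, 4, 9),
                Arrays.asList(2, 3, 8),
                Arrays.asList(5, 6, 7));
        System.out.println(mergeSortedChunks(chunks)); // [1, 2, ..., 9]
    }
}

The merge itself is cheap per row; cost on the client grows with the number of
parallel scans and the volume of rows that have to be pulled back and merged,
which is why pinning down the offending query matters.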

Thanks,
James

On Thu, Aug 25, 2016 at 6:49 AM, John Leach <jle...@splicemachine.com>
wrote:

> Can anyone explain why the client would be burning so much CPU and memory
> if the result is a single row?
>
> I suspect we configured something wrong on Phoenix but we are having a
> hard time figuring it out.
>
> Thanks in advance.
>
> Regards,
> John
>
> > On Aug 24, 2016, at 9:54 AM, Aaron Molitor <amoli...@splicemachine.com>
> wrote:
> >
> > Seeing higher than expected CPU/MEM usage for the java process started
> by the sqlline.py client.
> >
> > From top:
> > top - 14:37:32 up 7 days, 22:15,  2 users,  load average: 25.52, 9.74,
> 7.89
> > Tasks: 509 total,   1 running, 508 sleeping,   0 stopped,   0 zombie
> > Cpu(s): 61.2%us,  6.3%sy,  0.0%ni, 31.0%id,  0.5%wa,  0.0%hi,  1.0%si,
> 0.0%st
> > Mem:  65920564k total, 31913580k used, 34006984k free,   647004k buffers
> > Swap: 33030140k total,        0k used, 33030140k free, 10464056k cached
> >
> >  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > 8729 splice    20   0 29.9g  14g 889m S 1616.8 22.4  26:19.21 java
> >
> > Why is the JDBC client using so much memory/CPU?  The expected result
> > is only a single row.
> >
> > QUERY:
> > SELECT SUM(L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
> > FROM
> >  TPCH.LINEITEM,
> >  TPCH.PART
> > WHERE
> >  P_PARTKEY = L_PARTKEY
> >  AND P_BRAND = 'BRAND#23'
> >  AND P_CONTAINER = 'MED BOX'
> >  AND L_QUANTITY < (
> >    SELECT 0.2 * AVG(L_QUANTITY)
> >    FROM
> >      TPCH.LINEITEM
> >    WHERE
> >      L_PARTKEY = P_PARTKEY
> >  );
> >
> > ROW COUNTS:
> > TPCH.REGION             5
> > TPCH.NATION            25
> > TPCH.SUPPLIER     1000000
> > TPCH.PART        20000000
> > TPCH.PARTSUPP    80000000
> > TPCH.ORDERS     150000000
> > TPCH.CUSTOMER    15000000
> > TPCH.LINEITEM   600037902
> >
> > Thanks,
> > Aaron
>
>
