Hi Ilya,

Thanks for posting these numbers!  They're very interesting, especially the
information about jemalloc.

As I mentioned offline, if you can run a similar test that gives the query
cache a workout, that would be very interesting to know about.  tcmalloc,
the memory allocator that we build Hypertable with, has a more difficult
time with usage patterns that allocate in one thread and deallocate in a
different thread, and the query cache is the place that exhibits this usage
pattern the most.  Query cache entries get allocated by query threads and
deallocated by the update threads.  There is a static set of three threads
that run the update pipeline, and one of them is responsible for doing
query cache invalidation.  Queries, on the other hand, get carried out by a
pool of worker threads.  So to simulate this problematic allocation
pattern, be sure you run multiple parallel query clients.
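
If you want to exercise that pattern in isolation first, here is a rough
sketch (plain C++, not Hypertable code) of what the query cache traffic
looks like from the allocator's point of view - several threads allocating
and a single thread freeing.  The thread counts and block sizes are just
illustrative:

// Sketch of the cross-thread alloc/free pattern described above:
// "query" threads allocate blocks and hand them off through a shared
// queue, while a single "invalidation" thread frees them.  Linking this
// against tcmalloc vs. jemalloc is one way to isolate the behavior.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
  std::queue<char*> cache_entries;
  std::mutex mtx;
  std::condition_variable cv;
  bool done = false;

  // Pool of "query" threads: each allocates entries and enqueues them.
  std::vector<std::thread> queries;
  for (int t = 0; t < 8; ++t) {
    queries.emplace_back([&] {
      for (int i = 0; i < 100000; ++i) {
        char* entry = new char[4096];          // allocated in a query thread
        std::lock_guard<std::mutex> lock(mtx);
        cache_entries.push(entry);
        cv.notify_one();
      }
    });
  }

  // Single "invalidation" thread: frees entries allocated by other threads.
  std::thread invalidator([&] {
    std::unique_lock<std::mutex> lock(mtx);
    while (!done || !cache_entries.empty()) {
      cv.wait(lock, [&] { return done || !cache_entries.empty(); });
      while (!cache_entries.empty()) {
        delete[] cache_entries.front();        // freed in a different thread
        cache_entries.pop();
      }
    }
  });

  for (auto& q : queries) q.join();
  { std::lock_guard<std::mutex> lock(mtx); done = true; }
  cv.notify_one();
  invalidator.join();
  return 0;
}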

- Doug

On Mon, Nov 24, 2014 at 2:05 PM, Ilya Sorkin <[email protected]>
wrote:

> Hi,
>
> Recently we have been investigating the behavior of Hypertable 0.9.3.8
> built with different memory allocators.
>
> The test that we used involved loading data from multiple data files into
> a set of HT tables. We were primarily interested in the effect different
> allocators have on memory utilization by various HT processes. Here are the
> results as observed in top:
>
> *glibc*
> VIRT  RES  SHR  S %CPU %MEM    TIME+  COMMAND
> 30.3g  24g 6368 S    0 38.6  35:59.52 Hypertable.Rang
> 8886m  12m 5736 S    0  0.0   0:03.28 Hypertable.Mast
> 1525m 7376 5908 S    0  0.0   0:00.08 ThriftBroker
> 2725m 5348 4084 S    0  0.0   0:01.62 Hyperspace.Mast
>
> *tcmalloc-minimal*
> VIRT  RES  SHR  S %CPU %MEM    TIME+  COMMAND
> 26.6g 25g  4492 S    0 40.1  33:05.92 Hypertable.Rang
> 1350m 13m  3744 S    0  0.0   0:02.47 Hypertable.Mast
> 385m  6224 3444 S    0  0.0   0:00.06 ThriftBroker
> 369m  5728 2964 S    0  0.0   0:01.19 Hyperspace.Mast
>
> *jemalloc*
> VIRT  RES  SHR  S %CPU %MEM    TIME+  COMMAND
> 32.2g 23g  6428 S    0 38.1  34:55.09 Hypertable.Rang
> 1595m  27m 5892 S    0  0.0   0:03.59 Hypertable.Mast
> 455m  7988 5996 S    0  0.0   0:00.08 ThriftBroker
> 515m  9896 4748 S    0  0.0   0:01.44 Hyperspace.Mast
>
> With jemalloc, the table above shows the state right after the insertion
> rate dropped to 0.  Immediately after the snapshot was taken, the CPU
> usage of Hypertable.Range spiked to 90% and the reserved memory began to
> drop until I saw the following:
>
> VIRT  RES  SHR  S %CPU %MEM    TIME+  COMMAND
> 32.2g 1.5g 6428 S    0  2.3  35:55.57 Hypertable.Rang
>
> Furthermore, at about the halfway point of the run I observed the
> reserved memory drop from 20g to 14g and then increase again to 23g.
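>
> (We read these numbers off top by hand.  For anyone who wants to log the
> fluctuations instead of watching top, a rough sketch of how the sampling
> could be scripted by polling VmRSS from /proc - the PID argument and the
> 5-second interval are just illustrative, this is not part of our test
> harness:)
>
> // Sample the resident set size of a process by PID so RES fluctuations
> // can be logged over the run.  Pass the PID of the RangeServer (or any
> // other process) as argv[1].
> #include <chrono>
> #include <fstream>
> #include <iostream>
> #include <string>
> #include <thread>
>
> int main(int argc, char** argv) {
>   if (argc < 2) {
>     std::cerr << "usage: rss_sampler <pid>" << std::endl;
>     return 1;
>   }
>   std::string path = std::string("/proc/") + argv[1] + "/status";
>   while (true) {
>     std::ifstream status(path);
>     std::string line;
>     while (std::getline(status, line))
>       if (line.compare(0, 6, "VmRSS:") == 0)   // resident memory, in kB
>         std::cout << line << std::endl;
>     std::this_thread::sleep_for(std::chrono::seconds(5));
>   }
> }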
>
> Now this test mimicked our production environment: a single-threaded
> process pegged to a single CPU.  We are planning a more intricate test in
> the near future.  I'd be happy to hear any insights people may have
> regarding using HT with different allocators.
>
> Ilya Sorkin
>



-- 
Doug Judd
CEO, Hypertable Inc.
