Hi All,
Recently I ran some tests on the Hypertable range server's memory
usage, on a single server with 16GB of physical memory and a local
broker. I'm using Hypertable 0.9.0.10 with a default hypertable.cfg
file. Here are the results:
=== Total Memory Usage ===
Fresh start:
Virtual Memory 346,680KB
RSS Memory 10,972KB
After loading 2GB data (with variable-length key/values):
Virtual Memory 841,684KB
RSS Memory 502,960KB
Heap Memory 496,088KB
After loading 4GB data:
Virtual Memory 1,014,088KB
RSS Memory 676,176KB
Heap Memory 662,468KB
After loading 8GB data:
Virtual Memory 1,797,644KB
RSS Memory 1,452,820KB
Heap Memory 1,446,024KB
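For reference, the virtual and RSS figures above can be sampled from /proc
on Linux; here is a minimal, self-contained sketch (not Hypertable code)
that reads /proc/self/statm. The `Heap Memory' figures are reported
separately and aren't covered by this sketch.

// Minimal sketch: read virtual size and RSS of the current process from
// /proc/self/statm (Linux). Values in that file are in pages; the page
// size comes from sysconf(). Illustration only, not Hypertable code.
#include <stdio.h>
#include <unistd.h>

int main() {
  unsigned long vm_pages = 0, rss_pages = 0;
  FILE *f = fopen("/proc/self/statm", "r");
  if (f && fscanf(f, "%lu %lu", &vm_pages, &rss_pages) == 2) {
    long page_kb = sysconf(_SC_PAGESIZE) / 1024;
    printf("Virtual Memory %lu KB\n", vm_pages * page_kb);
    printf("RSS Memory     %lu KB\n", rss_pages * page_kb);
  }
  if (f)
    fclose(f);
  return 0;
}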
With tcmalloc's HEAPPROFILE function, we can easily figure out that
CellCache is the biggest heap user; in fact, the `new' statement in
the following code segment is executed millions of times:
int CellCache::add(const ByteString key, const ByteString value,
                   int64_t real_timestamp) {
  [...]
  (void)real_timestamp;
  // one heap allocation per cell inserted into the cache
  new_key.ptr = ptr = new uint8_t [total_len];
  memcpy(ptr, key.ptr, key_len);
  [...]
So I modified the code to log a message whenever a new or delete
statement is executed, and collected the following results:
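For what it's worth, the counting doesn't have to go through the log; a
simpler way is to bump counters at the allocation/free sites and dump them
periodically. A minimal sketch of the idea (the counter and function names
below are mine, not Hypertable code):

// Minimal sketch of counting CellCache allocations: atomic counters updated
// where the cache allocates and frees cell buffers. Names are illustrative.
#include <stddef.h>
#include <stdint.h>

static volatile uint64_t g_cellcache_news = 0;       // times `new' was hit
static volatile uint64_t g_cellcache_deletes = 0;    // times `delete' was hit
static volatile uint64_t g_cellcache_live_bytes = 0; // bytes currently held

// called right after `new uint8_t [total_len]' in CellCache::add()
inline void note_cellcache_alloc(size_t total_len) {
  __sync_fetch_and_add(&g_cellcache_news, 1);
  __sync_fetch_and_add(&g_cellcache_live_bytes, total_len);
}

// called right before the matching `delete []'
inline void note_cellcache_free(size_t total_len) {
  __sync_fetch_and_add(&g_cellcache_deletes, 1);
  __sync_fetch_and_sub(&g_cellcache_live_bytes, total_len);
}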
=== CellCache Memory Usage ===
After loading 2GB data:
new: 4426014 times
delete: 4341878 times
Actual Memory Usage: 46,728KB
Memory Pages Used: 16,051 (4KB pages) = 64,204KB (27% fragmentation)
External Fragmentation (by comparing the page numbers against the
/proc/pid/maps file): 496,088KB - 64,204KB = 431,884KB (87% fragmentation)
After loading 4GB data:
new: 8910493 times
delete: 8432404 times
Actual Memory Usage: 226,354KB
Memory Pages Used: 69,576 (4KB pages) = 278,304KB (19% fragmentation)
External Fragmentation: 662,468KB - 278,304KB = 384,164KB (58% fragmentation)
After loading 8GB data:
new: 17360036 times
delete: 17339789 times
Actual Memory Usage: 66,505KB
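To make the arithmetic explicit, here is how the percentages above fall out
for the 2GB case (a small self-contained calculation using the numbers
already listed, not Hypertable code):

// Worked example of the fragmentation arithmetic for the 2GB case:
//   in-page fragmentation  = 1 - actual / pages_used
//   external fragmentation = (heap - pages_used) / heap
#include <stdio.h>

int main() {
  const double actual_kb = 46728.0;   // bytes actually held by CellCache
  const double pages_kb  = 16051 * 4; // 16,051 pages * 4KB = 64,204KB
  const double heap_kb   = 496088.0;  // heap size after loading 2GB

  printf("in-page fragmentation:  %.0f%%\n", (1.0 - actual_kb / pages_kb) * 100);
  printf("external fragmentation: %.0f%%\n", (heap_kb - pages_kb) / heap_kb * 100);
  return 0;
}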
We can see that the memory usage of CellCache itself is actually quite
small, while memory fragmentation is severe, which leads to very poor
memory utilization (I have to say, even worse than Java).
If I compile with libc malloc instead of tcmalloc, the results are
even worse. I think we should reduce the use of dynamic memory
allocations; otherwise this situation can hardly change, even if we
find better memory allocation algorithms. One possible direction is
sketched below.
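As an illustration of what "reducing dynamic allocations" could look like,
here is a minimal arena-style sketch that packs many small cell buffers into
large blocks, so operator new is hit once per block instead of once per
cell. This is just my idea of a possible direction, not existing Hypertable
code; the class and names are made up.

// Minimal arena sketch: carve small cell buffers out of large blocks.
// Illustrative only; CellArena is not part of Hypertable.
#include <vector>
#include <stdint.h>
#include <stddef.h>

class CellArena {
public:
  CellArena(size_t block_size = 1024 * 1024)
    : m_block_size(block_size), m_offset(block_size) { }

  ~CellArena() {
    for (size_t i = 0; i < m_blocks.size(); ++i)
      delete [] m_blocks[i];
  }

  // hand out `len' bytes from the current block, starting a new block
  // (or an oversized one) when the current block is exhausted
  uint8_t *alloc(size_t len) {
    if (m_offset + len > m_block_size) {
      m_blocks.push_back(new uint8_t[len > m_block_size ? len : m_block_size]);
      m_offset = 0;
    }
    uint8_t *ptr = m_blocks.back() + m_offset;
    m_offset += len;
    return ptr;
  }

private:
  std::vector<uint8_t *> m_blocks;
  size_t m_block_size;
  size_t m_offset;
};

With something like this, the per-cell `new uint8_t [total_len]' in
CellCache::add() would become an `arena.alloc(total_len)', and the whole
arena could be dropped in one shot when the cache is flushed, which should
also take most of the pressure off the allocator.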
Donald