Hey,
Thanks, and sorry about that. I just found a bug this week where the new
code is over-allocating (though 30MB out of a 10G limit seems odd?).
e.g.: with -I 2m, it would allocate 2 megabytes of memory but then only use
up to 1MB of it. It's a one-line fix for a missed variable conversion.
I decided to give this a try on a production setup that has a very bimodal
size distribution (roughly a 50/50 split between 10k-100k values and 1M-10M
values) and lots of writes, where we've been running with "-I 10m -m 10240"
for a while. It didn't go so great. Almost immediately there were lots and