dormando wrote:
To be most accurate, it is "how many chunks will fit into the max item size, which by default is 1MB". The page size being equal to the max item size is just due to how the slabbing algorithm works: it creates slab classes between a minimum and a maximum chunk size, so the maximum ends up being the item size limit. I can see this changing in the future, where we have a "max item size" of whatever and a "page size" of 1MB, and larger items are made up of concatenated smaller pages or individual mallocs.
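(For intuition, here's a minimal C sketch of that slab-class scheme: chunk sizes grow geometrically from a minimum up to the page size, which is why the largest class ends up equal to the max item size. The 96-byte minimum and 1.25 growth factor below are illustrative assumptions, not memcached's exact defaults.)

    #include <stdio.h>

    int main(void) {
        const size_t page_size = 1024 * 1024; /* 1MB page == max item size */
        const double growth    = 1.25;        /* size ratio between classes (assumed) */
        size_t chunk = 96;                    /* smallest chunk size (assumed) */
        int class_id = 1;

        /* each class holds fixed-size chunks; sizes grow until one
         * chunk would fill an entire page */
        while (chunk < page_size) {
            printf("class %2d: chunk %7zu bytes, %5zu chunks per page\n",
                   class_id++, chunk, page_size / chunk);
            chunk = (size_t)(chunk * growth);
            chunk = (chunk + 7) & ~(size_t)7; /* keep chunks 8-byte aligned */
        }
        /* final class: a single chunk spanning the whole page,
         * i.e. the item size limit */
        printf("class %2d: chunk %7zu bytes, 1 chunk per page\n",
               class_id, page_size);
        return 0;
    }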
So what happens when a key is repeatedly written and grows a bit each time? I had trouble with that long ago in a BerkeleyDB version; I think the bug was eventually fixed. As things work now, if the new value has to move to a larger chunk, is the old space immediately freed?
--
Les Mikesell
lesmikes...@gmail.com