On Nov 15, 2007, at 16:30, Chris Bowditch wrote:


>> What I see as a possible issue, though, is that there is a theoretical limit to rehash() having any effect whatsoever. If the cache grows to 64 buckets, then the maximum number of segments that exceed the threshold can never be greater than half the table-size... This might be a non-issue, as this would only be triggered if the cache's size is at least 2048 instances (not counting the elements in the buckets that don't exceed the threshold). No problem for enums, keeps. Strings and numbers, though?

> 2048 doesn't sound good enough as a maximum number of instances if Strings and integers are included. Why can't this number be increased by having more buckets and/or segments?

It's not so much a maximum number of instances; rather, the number of buckets will not grow beyond 64. If the cache were to grow further, the instances would still be divided over 64 buckets (which means slightly longer retrieval times). This is not due to the number of segments, but simply due to the naïve condition that triggers a rehash. I'll see if I can come up with a better check, and will repost the patch after that.
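
Roughly, the situation is something like this (a minimal sketch only; the segment count, per-segment threshold and all names below are made-up assumptions for illustration, not the actual patch):

    /*
     * Sketch of a segmented cache with the kind of naive rehash trigger
     * discussed above. Constants and names are illustrative assumptions.
     */
    class SegmentedCacheSketch {

        private static final int SEGMENT_COUNT = 32;  // assumed fixed number of segments
        private static final int THRESHOLD = 64;      // assumed per-segment count before a segment counts as "full"

        private final int[] segmentCounts = new int[SEGMENT_COUNT];
        private int bucketCount = 8;                   // table size; doubled on each rehash

        void recordInsert(int segmentIndex) {
            segmentCounts[segmentIndex]++;
            maybeRehash();
        }

        /*
         * Naive condition: rehash when more segments exceed the threshold
         * than half the current number of buckets. Once bucketCount reaches
         * 64, bucketCount / 2 == 32 == SEGMENT_COUNT, so the condition can
         * never be satisfied again and the table stops growing.
         */
        private void maybeRehash() {
            int over = 0;
            for (int count : segmentCounts) {
                if (count > THRESHOLD) {
                    over++;
                }
            }
            if (over > bucketCount / 2) {
                bucketCount *= 2;  // redistribute the entries over twice as many buckets
            }
        }
    }

With a fixed number of segments, a trigger of this shape caps the table size as described: past 64 buckets, rehash() effectively becomes a no-op, which is exactly why the check needs to be reworked.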


Later

Andreas
