Github user jplevyak commented on the pull request:
https://github.com/apache/trafficserver/pull/364#issuecomment-162120619
The upshot is that the two RAM caches use the correct amount of memory
(within 2%); LRU works better for identically sized objects (because LRU is a
very good proxy for hit rate and has less memory overhead), while CLFUS works
better for variable/mixed-size objects (which is expected, since that is what
it is designed to handle). Note that for large caches the cost of CLFUS on
fixed-size objects approaches zero, as the overhead has less effect, while
the benefit for variable-size objects increases.
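For reference, the RAM cache algorithm and size are selected in
records.config; this is a sketch assuming the standard knob names, where
0 keeps the default CLFUS and 1 selects plain LRU, and the size shown is
256MB (the largest regression size) in bytes:

    CONFIG proxy.config.cache.ram_cache.size INT 268435456
    CONFIG proxy.config.cache.ram_cache.algorithm INT 1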
The regression tests run at 1MB, 16MB, and 256MB, but with relatively small
objects (16KB), so the results should be applicable to the more common
production RAM cache sizes in the GB range.
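To make the scaling argument concrete, here is a back-of-the-envelope sketch;
the per-object and fixed metadata costs below are made-up illustrative
numbers, not measurements from this patch:

    // Relative metadata overhead shrinks as the RAM cache grows, because the
    // fixed structures amortize while the per-object cost stays a constant
    // fraction of the (fixed) object size.
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::int64_t object_size     = 16 * 1024;   // 16KB objects, as in the regression
        const std::int64_t per_object_meta = 64;          // hypothetical bookkeeping bytes per entry
        const std::int64_t fixed_meta      = 256 * 1024;  // hypothetical fixed cost (buckets, history)
        const std::int64_t sizes[] = {1LL << 20, 16LL << 20, 256LL << 20, 4LL << 30};

        for (std::int64_t cache_size : sizes) {
            std::int64_t objects  = cache_size / object_size;
            std::int64_t overhead = fixed_meta + objects * per_object_meta;
            std::printf("cache %6lld MB: ~%7lld objects, overhead ~%.2f%% of budget\n",
                        static_cast<long long>(cache_size >> 20),
                        static_cast<long long>(objects),
                        100.0 * overhead / cache_size);
        }
        return 0;
    }

With these assumed numbers the overhead is roughly 25% of a 1MB budget but
well under 1% at 256MB and above, which is the sense in which the fixed-size
cost of the richer bookkeeping approaches zero for large caches.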