On 15/01/18 18:53, Ivan Larionov wrote:
Hello!

After migrating squid from non-SMP/aufs to SMP/rock, the memory cache hit ratio dropped significantly, from 50-100% down to 1-5%, while the disk cache hit ratio went up from 15-50% to a stable 60-65%. From a brief check of the log files it looks like in SMP/rock mode squid avoids using memory for small files of 1-3KB but does use it for 10KB+ files.
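Roughly speaking, the change was from something like this (sizes and paths here are simplified placeholders, not my real values):

    cache_dir aufs /var/cache/squid 20000 16 256

to this:

    workers 2
    cache_dir rock /var/cache/squid-rock 20000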

AIUI, SMP-mode rock operates as a fully separate process (a "Disker" kid), which delivers its results to the worker process as objects already in shared memory.

There should be little or no gain from that disk-to-memory promotion process anymore, since it would only be moving the object between memory locations. In fact, if cache_mem were not operating as shared memory even with SMP active (which is possible), the promotion would be an actively bad idea, as it would prevent other workers from using the object in future.
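For reference, whether cache_mem is shared between workers is controlled by the memory_cache_shared directive. A rough sketch (the cache_mem size is just an example; squid.conf.documented has the exact default behaviour):

    workers 2
    cache_mem 256 MB
    # force the memory cache to be shared between the SMP workers
    # (this is also the default when the required IPC support is available)
    memory_cache_shared on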

Those small objects show up as non-MEM_HIT because they are either a REFRESH or stored in the Disker shared memory instead of the cache_mem shared memory. The Squid logging is not quite up to recording the slim distinction of which of the multiple memory areas is being used.



I started tracking down the issue by disabling the disk cache completely, and it didn't change anything; I just started to get a MISS every time for a URL which was getting a MEM_HIT with the old configuration. Then I changed "workers 2" to "workers 1" and started getting memory hits as before.
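To be concrete, the two test configs were roughly this (the cache_mem size is just an example value):

    # test 1: no cache_dir at all, two workers -> almost everything is a MISS
    workers 2
    cache_mem 256 MB

    # test 2: same config with a single worker -> memory hits return
    workers 1
    cache_mem 256 MB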

So it seems like the issue is with shared memory:

When squid doesn't use shared memory it works as expected, even with multiple workers.
When squid uses shared memory it caches only a very small number of objects.

Am I doing anything wrong? Which debug options should I enable to provide more information if it seems like a bug?


Are you seeing an actual performance difference? If not, I would not worry about it.

FYI: if you really want to track this down, I suggest using Squid-4 to do that. Squid-3 is very near the end of its support lifetime, and changes of a deep nature do not have much chance at all of getting in there now.
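If you do decide to dig into it, something like the following should show the store and SMP/IPC decisions in cache.log (the section numbers below are from memory; doc/debug-sections.txt in the source tree has the authoritative list):

    # keep everything else quiet, be verbose for the storage manager (20)
    # and inter-process communication (54) code
    debug_options ALL,1 20,3 54,3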

Amos
_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users