On 10/04/2013 17:22, Mr Dash Four wrote:


Marcello Romani wrote:
On 10/04/2013 13:59, Mr Dash Four wrote:


Marcello Romani wrote:
On 09/04/2013 19:33, Mr Dash Four wrote:
> [snip]
if the maximum_object_size_in_memory is reduced, then I suppose squid's memory footprint will have to go down too, which makes the cache_mem option a bit useless.

I think it will just store more objects in RAM.
I am sorry, but I don't understand that logic.

If I set cache_mem (which is supposed to be the limit of ram squid is
going to use for caching), then the maximum_object_size_in_memory should
be irrelevant. The *number* of objects to be placed in memory should
depend on cache_mem, not the other way around.

You're wrong.
Each object that squid puts into cache_mem can have a different size.
Thus the number of objects stored in cache_mem will vary over time
depending on the traffic and selection algorithms.
I don't see how I am wrong in what I've posted above.

You wrote:
"if the maximum_object_size_in_memory is reduced,
then I suppose squid's memory footprint will have to go down too,
which makes the cache_mem option a bit useless."

(Perhaps you should've written: which *would make* the cache_mem option a bit useless.)

I haven't made real-life measurements to test how maximum_object_size_in_memory affects squid's memory footprint, but my feeling is that lowering it would *not* decrease memory usage. I would instead expect an *increase* in total memory consumption, because more objects in cache_mem would mean more memory used for the indexes needed to manage them.
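To make the two directives we're discussing concrete, here is a hypothetical squid.conf fragment (the values are illustrative only, not a recommendation; 200 MB matches the cache_mem you mentioned, 512 KB is just an example object-size cap):

```
# Upper bound on memory used for the in-memory object cache
# (NOT a cap on squid's total process memory)
cache_mem 200 MB

# Largest single object squid will keep in cache_mem; lowering this
# lets more (smaller) objects fit -- it does not shrink cache_mem itself
maximum_object_size_in_memory 512 KB
```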

I am not saying that the number of objects placed in RAM will be constant; all I am saying is that the total memory used by all objects placed in RAM should not be 6 times the cache_mem value I've specified in my configuration file - that is simply wrong, no matter how you twist it.

What currently seems to happen is that cache_mem is completely ignored and squid is trying to shove as many objects into my RAM as possible, to the point where nothing else on that machine is able to function nominally. This is like putting the cart before the horse - ridiculous!

As stated elsewhere, previous versions of squid had memory leaks. That doesn't mean squid is _designed_ to put as many objects in RAM as possible.
Well, as I indicated previously, my cache_mem is 200MB. Squid's current memory usage is 1.3GB - more than 6 times what I have indicated. That is not a simple memory "leak" - that is one hell of a raging torrent if you ask me!

Also, the cache_mem value must not be confused with a hard limit on
total squid memory usage (which AFAIK cannot be set). For example
there's also the memory used to manage the on-disk cache (10MB per GB
IIRC - google it for a reliable answer).
Even if we account for that, I don't see why squid should be occupying 6 times more memory than what I restricted it to use.

This is what the official squid wiki has to say about this ratio:

"rule of thumb: cache_mem is usually one third of the total memory consumption."

But you see... it's just a "rule of thumb". Squid uses additional memory to manage on-disk cache. Again, from the squid memory page:

"10 MB of memory per 1 GB on disk for 32-bit Squid
14 MB of memory per 1 GB on disk for 64-bit Squid"

So if you have a very large on-disk cache but specify a low cache_mem parameter, the 6:1 ratio can be easily exceeded.

Suppose you specify cache_mem 32MB and have a 40GB cache_dir.
That would give (at least) 32MB + (40GB / 1GB) * 10MB = 432MB, a ratio of 432 / 32 = 13.5.
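The arithmetic above can be sketched as a small helper (a back-of-the-envelope estimate only, using the per-GB index figures quoted from the squid wiki; the function name and structure are my own, not part of squid):

```python
def estimate_memory_mb(cache_mem_mb, cache_dir_gb, mb_per_gb=10):
    """Rough lower bound on squid memory usage, in MB.

    mb_per_gb: index memory per GB of on-disk cache
               (~10 for 32-bit squid, ~14 for 64-bit, per the squid wiki).
    Returns (total_mb, ratio of total to cache_mem).
    """
    index_mb = cache_dir_gb * mb_per_gb   # memory to manage the on-disk cache
    total_mb = cache_mem_mb + index_mb
    return total_mb, total_mb / cache_mem_mb

# The example from above: cache_mem 32MB, 40GB cache_dir, 32-bit squid
total, ratio = estimate_memory_mb(32, 40)
print(total, ratio)   # 432 13.5
```

This ignores everything else squid allocates (in-transit objects, internal pools, etc.), so the real ratio can only be higher.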

I'm not saying this would be a sensible configuration, nor am I denying there's an actual problem in your case. Nor am I claiming I could predict a squid instance's memory usage (I prefer to graph it over time with munin). It's just that IMVHO you're barking up the wrong tree

:-)

--
Marcello Romani
