On Wed, 22 Sep 2010 15:09:31 -0400, "Chad Naugle" <chad.nau...@travimp.com>
wrote:
> With that large amount of RAM I would increase those maximum numbers to,
> let's say, 8 MB, 16 MB, or 32 MB, especially if you plan on using heap
> LFUDA, which is optimized for storing larger objects and trashes smaller
> objects faster, whereas heap GDSF is the opposite, using LRU for memory
> for the large objects to offset the difference.
> 
> ---------------------------------------------
> Chad E. Naugle
> Tech Support II, x. 7981
> Travel Impressions, Ltd.
>  
> 
> 
>>>> Rajkumar Seenivasan <rkcp...@gmail.com> 9/22/2010 3:01 PM >>>
> Thanks for the tip. I will try with "heap GDSF" to see if it makes a
> difference.
> Any idea why the object is not considered as a hot-object and stored in
> memory?

see below.

> 
> I have...
> minimum_object_size 0 bytes
> maximum_object_size 5120 KB
> 
> maximum_object_size_in_memory 1024 KB
> 
> Earlier we had cache_swap_low and high at 80 and 85%, and the physical
> memory usage went high, leaving only 50MB free out of 15GB.
> To fix this issue, the low and high were set to 50 and 55%.

? 50% of the cache required to be empty so that RAM does not fill up? =>
the cache is too big or the RAM is not enough.

> 
> Does this change in "cache_replacement_policy" and "cache_swap_low /
> high" require a restart, or will just a -k reconfigure do it?
> 
> Current usage: top
> top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03, 0.00
> Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
> Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
> Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
> Swap: 25703960k total,       92k used, 25703868k free, 10692796k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 17442 squid     15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid
> 
> 
> #free
>              total       used       free     shared    buffers     cached
> Mem:      15736360   14175164    1561196          0     283160   10692864
> -/+ buffers/cache:    3199140   12537220
> Swap:     25703960         92   25703868
> 
> 
> Thanks.
> 
> 
> On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle <chad.nau...@travimp.com>
> wrote:
>> Perhaps you can try switching to heap GDSF instead of heap LFUDA. Also,
>> what are your minimum_object_size and maximum_object_size values?
>>
>> Perhaps you can also try setting the cache_swap_low / high back to
>> default (90 - 95) to see if that will make a difference.
>>
>> ---------------------------------------------
>> Chad E. Naugle
>> Tech Support II, x. 7981
>> Travel Impressions, Ltd.
>>
>>
>>
>>>>> Rajkumar Seenivasan <rkcp...@gmail.com> 9/22/2010 2:05 PM >>>
>> I have the following for replacement policy...
>>
>> cache_replacement_policy heap LFUDA
>> memory_replacement_policy lru
>>
>> thanks.
>>
>> On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle <chad.nau...@travimp.com>
>> wrote:
>>> What is your cache_replacement_policy directive set to?
>>>
>>> ---------------------------------------------
>>> Chad E. Naugle
>>> Tech Support II, x. 7981
>>> Travel Impressions, Ltd.
>>>
>>>
>>>
>>>>>> Rajkumar Seenivasan <rkcp...@gmail.com> 9/22/2010 1:55 PM >>>
>>> I have a strange issue happening with my squid (v 3.1.8):
>>> 2 squid servers in a sibling-to-sibling setup in accel mode.

What was the version in use before this happened? Was 3.1.8 okay for a
while, or did it start discarding right at the point of upgrade from
another version?

>>>
>>> After running squid for 2 to 3 days, the HIT rate has gone down:
>>> from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.
>>>
>>> store.log shows that even fresh requests are NOT getting stored onto
>>> disk and are getting RELEASED right away.
>>> This issue is with both squids...
>>>
>>> store.log entry:
>>> 1285176036.341 RELEASE -1 FFFFFFFF 7801460962DF9DCA15DE95562D3997CB
>>> 200 1285158415        -1 1285230415 application/x-download -1/279307
>>> GET http://....
>>> The requests have a max-age of 20 hrs.

The server advertised the content-length as unknown and then sent 279307
bytes (the -1/279307 field). Squid is forced to store it to disk
immediately; for all Squid knows a TB could be about to arrive.
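
A rough field-by-field reading of that store.log line (assuming the
standard store.log field order; the labels are from memory, not from the
config in this thread):

  1285176036.341            event timestamp
  RELEASE                   action: object dropped rather than kept
  -1 FFFFFFFF               cache_dir number / file number (none allocated)
  7801...97CB               MD5 cache key
  200                       HTTP status of the reply
  1285158415                Date header value
  -1                        Last-Modified (not sent)
  1285230415                Expires (Date + 72000 s = 20 hours)
  application/x-download    Content-Type
  -1/279307                 advertised Content-Length / bytes received
  GET http://...            request method and URL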

>>>
>>> squid.conf:
>>> cache_dir aufs /squid/var/cache 20480 16 256
>>> cache_mem 1536 MB
>>> memory_pools off
>>> cache_swap_low 50
>>> cache_swap_high 55

These tell Squid that 50% of the disk space allocated to the cache MUST be
empty at all times, and to erase content if more than that is used. The
defaults are a little below 100% in order to leave a small buffer of space
for line-speed traffic still arriving while Squid purges old objects to
make room for it.

The 90%/95% defaults were created back when large HDDs were measured in MB.

50%/55% with a 20GB cache only makes sense if you have something greater
than 250 Mbps of new cacheable HTTP data flowing through this one Squid
instance, in which case I'd suggest a bigger cache.

(My estimate of the bandwidth is calculated from: the % of cache required
to be free, divided by the roughly 5-minute lag in purging.)
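
Working that estimate through with the numbers in this thread, as a rough
back-of-envelope check (the 5-minute purge lag is the assumption stated
above, not a measured value):

  space forced to stay free  = 50% of 20480 MB  ~= 10 GB
  time to refill that space  ~= 5 minutes        = 300 s
  implied rate of new data   = 10 GB / 300 s    ~= 34 MB/s  ~= 270 Mbps

Unless this proxy really is taking in new cacheable data at that kind of
rate, moving the watermarks back toward the 90/95 defaults (as suggested
earlier in the thread) is the simpler fix.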


>>> refresh_pattern . 0 20% 1440
>>>
>>>
>>> The filesystem is reiserfs with RAID-0. Only 11GB is used for the cache.

Used or available?

cache_dir...20480 = 20GB allocated for the cache.

11GB in use is roughly 50% (cache_swap_low) of the 20GB, so that part seems
to be working as configured.
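
As a quick arithmetic check: 11 GB / 20.48 GB ~= 54%, which sits between
cache_swap_low (50) and cache_swap_high (55), i.e. the purging is holding
the cache inside the configured band.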


The 10MB/GB of RAM used by the in-memory index is calculated from an
average object size of around 4KB. You can check that your available RAM
roughly meets Squid's needs with: 10MB per GB of disk cache + the size of
cache_mem + 10MB per GB of cache_mem + about 256 KB per concurrent client
at peak traffic. This gives you a rough ceiling.
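
Plugging in the values from this thread, as a rough sketch (the peak
concurrent-client count below is a placeholder, not a figure reported in
the thread):

  disk cache index : 20 GB  x 10 MB/GB             ~=  200 MB
  cache_mem        :                                  1536 MB
  cache_mem index  : 1.5 GB x 10 MB/GB             ~=   15 MB
  clients          : e.g. 500 concurrent x 256 KB  ~=  128 MB
                                                    ----------
  rough ceiling                                    ~=  1.9 GB

That lines up with the ~1.8g RES shown for the squid process in the top
output above.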

Amos
