On Tue, Feb 18, 2014 at 2:42 AM, Rajiv Desai <ra...@maginatics.com> wrote:
> Some more info:
>
> Following are mgr:storedir stats after back-to-back downloads of 4 GB
> of data (i.e. the same 2 GB twice).
> Perhaps the 477 StoreEntries with MemObjects and the 468 Hot Object
> Cache Items are not shared?

Nah ... those are just the 53 static error etc. responses that each
child process caches for itself. With 8 workers plus the coordinator
that makes 9 kids, and 53 * 9 = 477.
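
If you want to double-check that, one quick way (assuming squidclient
is installed and Squid is listening on the default 127.0.0.1:3128;
adjust host and port otherwise) is to ask the cache manager directly:

  squidclient -h 127.0.0.1 -p 3128 mgr:info | grep -A 4 'Internal Data Structures'

If those really are the per-child static responses, that count should
stay close to roughly 53 entries per kid process even after your
downloads complete.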


>
> <stats>
> Connection information for squid:
> Number of clients accessing cache: 10
> Number of HTTP requests received: 78410
> Number of ICP messages received: 0
> Number of ICP messages sent: 0
> Number of queued ICP replies: 0
> Number of HTCP messages received: 0
> Number of HTCP messages sent: 0
> Request failure ratio: 0.00
> Average HTTP requests per minute since start: 8167.3
> Average ICP messages per minute since start: 0.0
> Select loop called: 1228150 times, 4.226 ms avg
>
> Cache information for squid:
> Hits as % of all requests: 5min: 44.7%, 60min: 44.7%
> Hits as % of bytes sent: 5min: 44.9%, 60min: 44.9%
> Memory hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
> Disk hits as % of hit requests: 5min: 88.9%, 60min: 88.9%
> Storage Swap size: 2454128 KB
> Storage Swap capacity: 1.2% used, 98.8% free
> Storage Mem size: 1980 KB
> Storage Mem capacity: 0.0% used, 0.0% free
> Mean Object Size: 62.81 KB
> Requests given to unlinkd: 0
>
> Median Service Times (seconds)  5 min    60 min:
> HTTP Requests (All):   0.03394  0.03394
> Cache Misses:          0.04430  0.04430
> Cache Hits:            0.02041  0.02041
> Near Hits:             0.00000  0.00000
> Not-Modified Replies:  0.00000  0.00000
> DNS Lookups:           0.02896  0.02896
> ICP Queries:           0.00000  0.00000
>
> Resource usage for squid:
> UP Time: 576.032 seconds
> CPU Time: 476.166 seconds
> CPU Usage: 82.66%
> CPU Usage, 5 minute avg: 118.39%
> CPU Usage, 60 minute avg: 88.17%
> Process Data Segment Size via sbrk(): 81444 KB
> Maximum Resident Size: 4967328 KB
> Page faults with physical i/o: 2
>
> Memory usage for squid via mallinfo():
> Total space in arena:   82632 KB
> Ordinary blocks:        75092 KB   4345 blks
> Small blocks:               0 KB      0 blks
> Holding blocks:        350592 KB     73 blks
> Free Small blocks:          0 KB
> Free Ordinary blocks:    7540 KB
> Total in use:            7540 KB 2%
> Total free:              7540 KB 2%
> Total size:            433224 KB
>
> Memory accounted for:
> Total accounted:        27358 KB   6%
> memPool accounted:      27358 KB   6%
> memPool unaccounted:   405866 KB  94%
> memPoolAlloc calls:  17695559
> memPoolFree calls:   17764839
>
> File descriptor usage for squid:
> Maximum number of file descriptors:   589824
> Largest file desc currently in use:     35
> Number of file desc currently in use:  135
> Files queued for open:                   0
> Available number of file descriptors: 589689
> Reserved number of file descriptors:   900
> Store Disk files open:                   1
>
> Internal Data Structures:
>   477 StoreEntries
>   477 StoreEntries with MemObjects
>   468 Hot Object Cache Items
> 39070 on-disk objects
> </stats>
>
> On Tue, Feb 18, 2014 at 1:52 AM, Rajiv Desai <ra...@maginatics.com> wrote:
>> Hello,
>>
>> I need some guidance on the optimal sharing of cache among SMP workers
>> using Large Rock.
>>
>> Context:
>>
>> I am using squid cache with 8 SMP workers and a 200 GB rock cache stored on 
>> SMP.
>> (Using squid-3.HEAD-20140127-r13248, which has Large Rock support to
>> cache objects > 32 KB.)
>>
>> I have set:
>> maximum_object_size 4 MB
>> cache_dir rock /mnt/squid-cache 204800 max-size=4194304
>> cache_mem 0 MB
>>
>> My refresh_pattern is very permissive, which basically allows caching
>> everything:
>> refresh_pattern . 129600 100% 129600 ignore-auth
>>
>>
>> Questions:
>>
>> I am trying to test for optimal caching behavior, where after I have
>> downloaded 1 GB of data (~16000 objects of ~64 KB each) a subsequent
>> download of the same data should be all cache hits.
>> However, I see several TCP misses in access.log. The hit ratio on a
>> subsequent download of the same objects is ~85%.
>>
>> 1. Is this expected in a multiple-worker setup? If so, can you please
>> briefly explain what contributes to these misses and how they can
>> be minimized?
>>
>> 2. I set cache_mem to 0 because I believe cache_mem sharing is
>> opportunistic and hence all workers might not always share all objects
>> in cache_mem. Is that correct? What impact does the cache_mem size have
>> on deterministic sharing of the cache between workers?
>>
>> 3. Moreover, are there any tips or guidelines to optimize the hit rate
>> for previously downloaded objects?
>>
>> Thanks,
>> Rajiv
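
For what it is worth, the pieces all workers share are the rock
cache_dir(s) and, when cache_mem is non-zero and memory_cache_shared is
on, the shared memory cache; as far as I understand, setting cache_mem
to 0 just removes the memory layer rather than making the disk sharing
any more deterministic. As a rough sketch only (the workers count
matches your mail; every other value here is an illustrative assumption,
not a tuned recommendation), an SMP large-rock setup along these lines
is where I would start:

  # 8 worker kids plus the coordinator process
  workers 8

  # shared on-disk cache; the slot-size value is only illustrative
  cache_dir rock /mnt/squid-cache 204800 slot-size=16384 max-size=4194304
  maximum_object_size 4 MB

  # a shared memory cache instead of cache_mem 0
  cache_mem 256 MB
  memory_cache_shared on
  maximum_object_size_in_memory 512 KB

Whether that accounts for the remaining misses depends on what the
TCP_MISS entries in access.log actually are, so checking their exact
tags is still the first thing I would do.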
