On 11/05/11 19:19, guest01 wrote:
Hi,

I am currently using squid 3.1.12 as a forward proxy without
hard-disk caching (only RAM is used for caching). Each server runs
RHEL 5.5 and is fairly powerful (16 CPUs, 28GB RAM), yet every server
starts swapping a few days after startup. The workaround at the
moment is to reboot the servers once a week, which I don't really
like, since the swapping causes serious side effects such as
performance problems.

way too much swapping:
http://imageshack.us/m/52/6149/memoryday.png

I have already read a lot of posts and mailing list threads about
similar problems, but unfortunately I was not able to solve this one.
I added the following settings to my squid.conf file:
# cache specific settings
cache_replacement_policy heap LFUDA
cache_mem 1600 MB
memory_replacement_policy heap LFUDA
maximum_object_size_in_memory 2048 KB
memory_pools off
cache_swap_low 85
cache_swap_high 90

(There are four Squid instances per server, so 1600*4 = 6400MB of RAM
is used for caching, which is not even 1/4 of the total available
amount of RAM. Plenty of headroom, don't you think?)

Note that cache_mem is "for HTTP object caching", emphasis on *caching* and "HTTP object". In-transit objects and the non-HTTP caches (IP cache, domain name cache, persistent connections cache, client database, Via/Forwarded database, network performance cache, auth caches, external ACL caches), plus the indexes for all of those caches, use other memory.

Then again, all of those should use no more than a few GB combined. So you may have hit a new leak (all the known ones were fixed before 3.1.12).
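
If you want to rule those side caches out, most of them can be capped or disabled from squid.conf. A minimal sketch (the sizes here are illustrative only; check the squid.conf.documented shipped with your build for the exact directives and defaults):

# cap the DNS-related caches (entries, not bytes)
ipcache_size 1024
fqdncache_size 1024
# drop the per-client statistics database entirely
client_db off

Watching whether the per-process RSS still grows after that narrows down where the memory is actually going.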


The negative values in the output below ("Memory usage for squid via
mallinfo():") look very strange to me. Maybe that is the reason for
running out of RAM?

mallinfo() breaks badly once a process grows past 2GB of RAM: its counters are 32-bit and wrap around into negative numbers. That whole section can be ignored.
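
You can even see the wrap-around in your own numbers: the counters lose 2^32 bytes (4194304 KB) on overflow, so adding that back recovers the real figure:

  reported arena:  -1049140 KB
  + 2^32 bytes:    -1049140 + 4194304 = 3145164 KB
  sbrk() figure:    3145032 KB  (reported further down)

Those agree to within ~130 KB, so the arena really is about 3GB and the negative sign is pure integer overflow.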

The section underneath it, "Memory accounted for:", is Squid's own accounting and more of a worry. Negatives there were fixed before 3.1.10, so it should not be showing them.
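
If you want to watch that accounting between restarts, the cache manager memory report is the thing to poll. For example (assuming squidclient is installed and can reach the instance; adjust host and port for each of your four instances):

  squidclient -h 127.0.0.1 -p 3128 mgr:mem

Two snapshots taken a day apart should show which pool, if any, grows without bound.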


HTTP/1.0 200 OK
Server: squid/3.1.12
Mime-Version: 1.0
Date: Wed, 11 May 2011 07:06:10 GMT
Content-Type: text/plain
Expires: Wed, 11 May 2011 07:06:10 GMT
Last-Modified: Wed, 11 May 2011 07:06:10 GMT
X-Cache: MISS from xlsqip03_1
Via: 1.0 xlsqip03_1 (squid/3.1.12)
Connection: close

Squid Object Cache: Version 3.1.12
Start Time:     Wed, 27 Apr 2011 11:01:13 GMT
Current Time:   Wed, 11 May 2011 07:06:10 GMT
Connection information for squid:
         Number of clients accessing cache:      1671
         Number of HTTP requests received:       16144359
         Number of ICP messages received:        0
         Number of ICP messages sent:    0
         Number of queued ICP replies:   0
         Number of HTCP messages received:       0
         Number of HTCP messages sent:   0
         Request failure ratio:   0.00
         Average HTTP requests per minute since start:   810.3
         Average ICP messages per minute since start:    0.0
         Select loop called: 656944758 times, 1.820 ms avg
Cache information for squid:
         Hits as % of all requests:      5min: 17.4%, 60min: 18.2%
         Hits as % of bytes sent:        5min: 45.6%, 60min: 39.9%
         Memory hits as % of hit requests:       5min: 86.1%, 60min: 88.9%
         Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
         Storage Swap size:      0 KB
         Storage Swap capacity:   0.0% used,  0.0% free
         Storage Mem size:       1622584 KB
         Storage Mem capacity:   100.0% used,  0.0% free

Okay 1.6 GB of RAM used for caching HTTP objects. Fully used.

         Mean Object Size:       0.00 KB

Problem #1. It *may* be that Squid is not including the in-memory objects when it calculates that mean.
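
A rough cross-check from the figures elsewhere in this report: the 1622584 KB of memory storage divided by the 96996 StoreEntries listed at the bottom gives roughly

  1622584 KB / 96996 objects = ~16.7 KB per object

which would be a perfectly plausible mean, supporting the idea that the 0.00 KB is an accounting glitch rather than genuinely empty objects.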

         Requests given to unlinkd:      0
Median Service Times (seconds)  5 min    60 min:
         HTTP Requests (All):   0.01648  0.01235
         Cache Misses:          0.05046  0.04277
         Cache Hits:            0.00091  0.00091
         Near Hits:             0.01469  0.01745
         Not-Modified Replies:  0.00000  0.00091
         DNS Lookups:           0.00190  0.00190
         ICP Queries:           0.00000  0.00000
Resource usage for squid:
         UP Time:        1195497.286 seconds
         CPU Time:       22472.507 seconds
         CPU Usage:      1.88%
         CPU Usage, 5 minute avg:        5.38%
         CPU Usage, 60 minute avg:       5.44%
         Process Data Segment Size via sbrk(): 3145032 KB
         Maximum Resident Size: 0 KB
         Page faults with physical i/o: 8634
Memory usage for squid via mallinfo():
         Total space in arena:  -1049140 KB
         Ordinary blocks:       -1277813 KB  87831 blks
         Small blocks:               0 KB      0 blks
         Holding blocks:          2240 KB      5 blks
         Free Small blocks:          0 KB
         Free Ordinary blocks:  228673 KB
         Total in use:          -1275574 KB 122%
         Total free:            228674 KB -22%
         Total size:            -1046900 KB
Memory accounted for:
         Total accounted:       -1375357 KB 131%
         memPool accounted:     2818947 KB -269%
         memPool unaccounted:   -3865847 KB 0%
         memPoolAlloc calls:       111
         memPoolFree calls:  8322084644
File descriptor usage for squid:
         Maximum number of file descriptors:   1024
         Largest file desc currently in use:    563
         Number of file desc currently in use:  472
         Files queued for open:                   0
         Available number of file descriptors:  552
         Reserved number of file descriptors:   100
         Store Disk files open:                   0
Internal Data Structures:
          96996 StoreEntries
          96996 StoreEntries with MemObjects
          96980 Hot Object Cache Items
              0 on-disk objects

Has anyone experienced similar things or does even know a solution?

Not since we fixed Squid's capacity to calculate storage used by >2GB objects.

Can you find the actual maximum and average object sizes for the cached objects in that Squid? (The mgr:vm_objects report should have all the details; "inmem_hi:" is the in-memory object size.)
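
Something like this should pull those numbers out (a sketch, assuming squidclient can reach the instance and that each object in the report carries an "inmem_hi:" line holding a byte count; adjust host/port per instance):

  squidclient -h 127.0.0.1 -p 3128 mgr:vm_objects | \
    awk '/inmem_hi:/ { n++; sum += $2; if ($2 > max) max = $2 }
         END { if (n) printf "objects=%d avg=%.0f max=%d bytes\n", n, sum/n, max }'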

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1
