On 03/07/2017 1:14 PM, Alex Rousskov wrote:
On 03/07/2017 08:40 AM, Heiler Bemerguy wrote:
I'm using Squid 4.0.18 and noticed something (iostat -x 5):
Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sda        0,00    0,00  0,00  0,25   0,00  28,00    224,00      0,00   8,00     0,00     8,00   8,00   0,20
sdc        0,00    0,00  0,00  0,00   0,00   0,00      0,00      0,00   0,00     0,00     0,00   0,00   0,00
sdb        0,00    0,00  0,00  0,00   0,00   0,00      0,00      0,00   0,00     0,00     0,00   0,00   0,00
sdd        0,00    0,00  0,00  0,00   0,00   0,00      0,00      0,00   0,00     0,00     0,00   0,00   0,00
No cache HDs are being accessed, only the main one (sda), which is where the
logs are saved. By the way, Squid is sending 80 Mbit/s to the network,
according to iftop.
cache.log:
2017/03/07 05:23:59 kid5| ERROR: worker I/O push queue for /cache4/rock overflow: ipcIo5.206991w9
2017/03/07 05:24:10 kid5| WARNING: communication with /cache4/rock may be too slow or disrupted for about 7.00s; rescued 304 out of 304 I/Os
2017/03/07 08:00:30 kid5| WARNING: abandoning 1 /cache2/rock I/Os after at least 7.00s timeout
2017/03/07 10:50:45 kid5| WARNING: abandoning 1 /cache2/rock I/Os after at least 7.00s timeout
I presume your iostat output covers 5 seconds, while the cache.log output
spans 5 hours. Was there no cache disk traffic during those 5 hours? Do
those 5 iostat seconds match the timestamp of any single cache.log WARNING?
No. I used iostat to check whether the HDs were being accessed "right now".
Many minutes passed and all reads/writes stayed at zero. With 80 Mbit/s of
traffic going through, how could nothing be written to or read from disk?
It's as if Squid stopped using the cache_dirs for some reason, so I grepped
cache.log for the word "rock", and that's what it output.
squid.conf:
cache_dir rock /cache2 110000 min-size=0 max-size=65536 max-swap-rate=200 swap-timeout=360
cache_dir rock /cache3 110000 min-size=65537 max-size=262144 max-swap-rate=200 swap-timeout=380
cache_dir rock /cache4 110000 min-size=262145 max-swap-rate=200 swap-timeout=500
Should I raise any values or tweak something?
Yes, but it is not yet clear what. If you suspect that your disks cannot
handle the load, decrease max-swap-rate. However, there is currently no
firm evidence that your disks cannot handle the load; it could be
something else, such as insufficient IPC RAM or Squid bugs.
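For example, lowering the ceiling on the busiest store might look like this (a sketch only; 100/sec is an arbitrary illustration, and the right value depends on what the disks can actually sustain):

cache_dir rock /cache4 110000 min-size=262145 max-swap-rate=100 swap-timeout=500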
How can I check IPC RAM? I've never tweaked it.
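On Linux, one place to look is the shared-memory tmpfs: Squid's SMP workers pass disk I/O through shared-memory queues, which typically show up as files under /dev/shm. A sketch (the squid-* segment naming is an assumption and varies by build):

```shell
#!/bin/sh
# Sketch, Linux-specific: inspect the shared-memory filesystem that
# Squid's SMP IPC queues normally live in.
df -h /dev/shm                               # capacity/usage of the tmpfs itself
ls -l /dev/shm/squid-* 2>/dev/null || true   # per-segment sizes, if named this way
```

If /dev/shm is nearly full, or mounted much smaller than the sum of the segments Squid wants, the IPC queues cannot grow as intended.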
Any Squid kid crashes? How many Squid workers do you use?
Can you collect enough iostat 5-second outputs to correlate with
long-term cache.log messages? I would also collect other system activity
during those hours. The "atop" tool may be useful for collecting
everything in one place.
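The suggested collection can be sketched as a small script that interleaves timestamped iostat snapshots into one file, so they can be lined up with cache.log entries later. The output path and sampling command here are assumptions; "atop" would gather the same data (and more) in one place:

```shell
#!/bin/sh
# Sketch: append timestamped disk-activity snapshots to one file for
# later correlation with cache.log WARNINGs.
OUT=${OUT:-/tmp/squid-disk-activity.log}

collect_disk_samples() {    # usage: collect_disk_samples <count>
    i=0
    while [ "$i" -lt "$1" ]; do
        date '+%Y/%m/%d %H:%M:%S' >> "$OUT"      # cache.log-style timestamp
        iostat -x >> "$OUT" 2>&1 || true         # one extended snapshot
        i=$((i + 1))
    done
}
```

For instance, `while :; do collect_disk_samples 1; sleep 5; done` approximates the `iostat -x 5` cadence used above, but with timestamps you can match against the log.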
Couldn't find any crashes in the log file. 6 workers and 3 cache_dirs.
I've just noticed that Squid has been running since Feb 18 (Start Time:
Sat, 18 Feb 2017 15:38:44 GMT), and since the beginning there have been a
lot of warnings in cache.log. (The logs I pasted in the earlier email were
from today's usage only.)
I think it stopped using the cache stores back then.
2017/02/18 13:48:19 kid3| ERROR: worker I/O push queue for /cache4/rock overflow: ipcIo3.9082w9
2017/02/18 13:48:42 kid4| ERROR: worker I/O push queue for /cache4/rock overflow: ipcIo4.3371w9
2017/02/18 14:06:01 kid9| WARNING: /cache4/rock delays I/O requests for 9.97 seconds to obey 200/sec rate limit
2017/02/18 14:06:34 kid9| WARNING: /cache4/rock delays I/O requests for 21.82 seconds to obey 200/sec rate limit
2017/02/18 14:06:42 kid4| WARNING: abandoning 1 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:06:47 kid3| WARNING: abandoning 1 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:06:48 kid1| WARNING: abandoning 1 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:06:49 kid4| WARNING: abandoning 4 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:06:54 kid3| WARNING: abandoning 2 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:07:55 kid9| WARNING: /cache4/rock delays I/O requests for 68.64 seconds to obey 200/sec rate limit
2017/02/18 14:08:03 kid5| WARNING: abandoning 511 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:08:47 kid2| WARNING: abandoning 20 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:08:51 kid3| WARNING: abandoning 41 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 14:08:54 kid1| WARNING: abandoning 41 /cache4/rock I/Os after at least 7.00s timeout
2017/02/18 15:26:35 kid5| ERROR: worker I/O push queue for /cache4/rock overflow: ipcIo5.31404w9
2017/02/18 15:29:00 kid9| WARNING: /cache4/rock delays I/O requests for 9.92 seconds to obey 200/sec rate limit
2017/02/18 15:29:13 kid9| WARNING: /cache4/rock delays I/O requests for 8.23 seconds to obey 200/sec rate limit
2017/02/18 15:29:45 kid9| WARNING: /cache4/rock delays I/O requests for 8.86 seconds to obey 200/sec rate limit
2017/02/18 15:30:06 kid9| WARNING: /cache4/rock delays I/O requests for 7.34 seconds to obey 200/sec rate limit
2017/02/18 15:30:27 kid9| WARNING: /cache4/rock delays I/O requests for 7.65 seconds to obey 200/sec rate limit
2017/02/18 15:30:48 kid9| WARNING: /cache4/rock delays I/O requests for 8.97 seconds to obey 200/sec rate limit
2017/02/18 15:31:09 kid9| WARNING: /cache4/rock delays I/O requests for 8.52 seconds to obey 200/sec rate limit
2017/02/18 15:31:22 kid9| WARNING: /cache4/rock delays I/O requests for 10.61 seconds to obey 200/sec rate limit
2017/02/18 17:19:40 kid9| WARNING: /cache4/rock delays I/O requests for 10.22 seconds to obey 200/sec rate limit
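With weeks of messages like these, a small helper can tally them per cache_dir and per hour to show when each store started degrading. A sketch (the patterns match the excerpts above; the log path in the usage example is an assumption):

```shell
#!/bin/sh
# rock_warn_summary: count rock-store WARNING/ERROR lines per day+hour
# per cache_dir, reading a cache.log from stdin or from file arguments.
rock_warn_summary() {
    awk 'match($0, /\/cache[0-9]+\/rock/) {
             dir  = substr($0, RSTART, RLENGTH)   # e.g. /cache4/rock
             hour = substr($2, 1, 2)              # HH from the HH:MM:SS field
             count[$1 " " hour ":00 " dir]++
         }
         END { for (k in count) print count[k], k }' "$@" | sort -k2,3
}
```

For example, `rock_warn_summary /var/log/squid/cache.log` prints lines like "41 2017/02/18 14:00 /cache4/rock", making a sudden, permanent jump easy to spot.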
Cache information for squid:
Hits as % of all requests: 5min: 6.4%, 60min: 6.1%
Hits as % of bytes sent: 5min: -0.5%, 60min: 1.5%
Memory hits as % of hit requests: 5min: 33.4%, 60min: 34.1%
*Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%*
Store Disk files open: 3
Internal Data Structures:
117119 StoreEntries
117119 StoreEntries with MemObjects
280701 Hot Object Cache Items
2788823 on-disk objects
--
Best Regards,
Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751
_______________________________________________
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev