[squid-dev] Rock store stopped accessing discs

2017-03-07 Thread Heiler Bemerguy

I'm using squid 4.0.18

And I noticed something (iostat -x 5):

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0,00     0,00    0,00    0,25     0,00    28,00   224,00     0,00    8,00    0,00    8,00   8,00   0,20
sdc               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00
sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00
sdd               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00


No HDs are being accessed except the main one (sda), which is where the 
logs are saved. BTW, Squid is sending 80 Mbit/s to the network, according 
to iftop.


cache.log:

2017/03/07 05:23:59 kid5| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo5.206991w9
2017/03/07 05:24:10 kid5| WARNING: communication with /cache4/rock may 
be too slow or disrupted for about 7.00s; rescued 304 out of 304 I/Os
2017/03/07 08:00:30 kid5| WARNING: abandoning 1 /cache2/rock I/Os after 
at least 7.00s timeout
2017/03/07 10:50:45 kid5| WARNING: abandoning 1 /cache2/rock I/Os after 
at least 7.00s timeout


squid.conf:

cache_dir rock /cache2 11 min-size=0 max-size=65536 max-swap-rate=200 swap-timeout=360
cache_dir rock /cache3 11 min-size=65537 max-size=262144 max-swap-rate=200 swap-timeout=380
cache_dir rock /cache4 11 min-size=262145 max-swap-rate=200 swap-timeout=500


Should I raise any values or tweak something? Please advise..

--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



Re: [squid-dev] Rock store stopped accessing discs

2017-03-07 Thread Heiler Bemerguy


On 07/03/2017 13:14, Alex Rousskov wrote:

On 03/07/2017 08:40 AM, Heiler Bemerguy wrote:

I'm using squid 4.0.18

And noticed something: (iostat -x 5)

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0,00     0,00    0,00    0,25     0,00    28,00   224,00     0,00    8,00    0,00    8,00   8,00   0,20
sdc               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00
sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00
sdd               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00

No HDs are being accessed except the main one (sda), which is where the
logs are saved. BTW, Squid is sending 80 Mbit/s to the network, according to iftop.

cache.log:

2017/03/07 05:23:59 kid5| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo5.206991w9
2017/03/07 05:24:10 kid5| WARNING: communication with /cache4/rock may be too 
slow or disrupted for about 7.00s; rescued 304 out of 304 I/Os
2017/03/07 08:00:30 kid5| WARNING: abandoning 1 /cache2/rock I/Os after at 
least 7.00s timeout
2017/03/07 10:50:45 kid5| WARNING: abandoning 1 /cache2/rock I/Os after at 
least 7.00s timeout

I presume your iostat output covers 5 seconds. The cache.log output
spans 5 hours. Was there no cache disk traffic during those 5 hours? Do
those 5 iostat seconds match the timestamp of any single cache.log WARNING?


No. I used iostat to check whether the HDs were being accessed "right 
now". Many minutes passed and all writes/reads remained at zero. With 
80 Mbit/s of traffic going on, how could nothing be written to or read 
from disk? It's as if Squid stopped using the cache_dirs for some reason, 
so I grepped cache.log for the word "rock", and that's the output it gave.



squid.conf:

cache_dir rock /cache2 11 min-size=0 max-size=65536 max-swap-rate=200 swap-timeout=360
cache_dir rock /cache3 11 min-size=65537 max-size=262144 max-swap-rate=200 swap-timeout=380
cache_dir rock /cache4 11 min-size=262145 max-swap-rate=200 swap-timeout=500

Should I raise any values? tweak something?

Yes, but it is not yet clear what. If you suspect that your disks cannot
handle the load, decrease max-swap-rate. However, there is currently no
firm evidence that your disks cannot handle the load. It could be
something else like insufficient IPC RAM or Squid bugs.


How can I check IPC RAM? I've never tweaked it.
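For reference, a rough first check, assuming a Linux system where Squid's
SMP shared-memory segments are created under /dev/shm (the usual default;
the squid-* glob below is an assumption about the segment names):

    df -h /dev/shm
    ls -lh /dev/shm/squid-*

If /dev/shm is nearly full, or the squid-* segments are missing, that
could point at an IPC/shared-memory problem rather than the disks.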


Any Squid kid crashes? How many Squid workers do you use?

Can you collect enough iostat 5-second outputs to correlate with
long-term cache.log messages? I would also collect other system activity
during those hours. The "atop" tool may be useful for collecting
everything in one place.
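A rough sketch of the iostat part, assuming the sysstat iostat with its
-t (timestamp) option is available; the log path is just an example:

    iostat -x -t 5 >> /var/log/iostat-squid.log &

Each 5-second report is then prefixed with a timestamp that can be
matched against cache.log entries.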

I couldn't find any crashes in the log file. 6 workers and 3 cache_dirs.
I've just noticed that Squid has been running since Feb 18 (Start Time:
Sat, 18 Feb 2017 15:38:44 GMT), and there have been a lot of warnings in
cache.log since the beginning. (The logs I pasted in the earlier email
were from today's usage only.)

I think it has not been using the cache stores since then..

2017/02/18 13:48:19 kid3| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo3.9082w9
2017/02/18 13:48:42 kid4| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo4.3371w9
2017/02/18 14:06:01 kid9| WARNING: /cache4/rock delays I/O requests for 
9.97 seconds to obey 200/sec rate limit
2017/02/18 14:06:34 kid9| WARNING: /cache4/rock delays I/O requests for 
21.82 seconds to obey 200/sec rate limit
2017/02/18 14:06:42 kid4| WARNING: abandoning 1 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:06:47 kid3| WARNING: abandoning 1 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:06:48 kid1| WARNING: abandoning 1 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:06:49 kid4| WARNING: abandoning 4 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:06:54 kid3| WARNING: abandoning 2 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:07:55 kid9| WARNING: /cache4/rock delays I/O requests for 
68.64 seconds to obey 200/sec rate limit
2017/02/18 14:08:03 kid5| WARNING: abandoning 511 /cache4/rock I/Os 
after at least 7.00s timeout
2017/02/18 14:08:47 kid2| WARNING: abandoning 20 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:08:51 kid3| WARNING: abandoning 41 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 14:08:54 kid1| WARNING: abandoning 41 /cache4/rock I/Os after 
at least 7.00s timeout
2017/02/18 15:26:35 kid5| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo5.31404w9
2017/02/18 15:29:00 kid9| WARNING: /cache4/rock delays I/O requests for 
9.92 seconds to obey 200/sec rate limit
2017/02/18 15:29:13 kid9| WARNING: /cache4/rock delays I/O requests for 
8.23 seconds to o

Re: [squid-dev] Rock store stopped accessing discs

2017-03-07 Thread Heiler Bemerguy
ibrary/bootstrap/css/bootstrap.min.css
2017/03/07 15:53:05.302 kid6| 54,2| ../../../src/ipc/Queue.h(490) 
findOldest: peeking from 8 to 6 at 1
2017/03/07 15:53:05.302 kid6| 47,2| IpcIoFile.cc(415) canWait: cannot 
wait: 1136930958 oldest: ipcIo6.153009r8
2017/03/07 15:53:05.302 kid6| 20,2| store_io.cc(38) storeCreate: 
storeCreate: no swapdirs for e:=msw1DV/0x3a4d5870*2
2017/03/07 15:53:05.302 kid6| 20,3| store.cc(457) lock: storeUnregister 
locked key C8303E098834088226809BF615D12C51 e:=msDV/0x3a4d5870*3
2017/03/07 15:53:05.302 kid6| 90,3| store_client.cc(755) 
storePendingNClients: storePendingNClients: returning 0
2017/03/07 15:53:05.302 kid6| 20,3| store.cc(494) unlock: 
storeUnregister unlocking key C8303E098834088226809BF615D12C51 
e:=msDV/0x3a4d5870*3
2017/03/07 15:53:05.302 kid6| 20,3| store.cc(494) unlock: 
clientReplyContext::removeStoreReference unlocking key 
C8303E098834088226809BF615D12C51 e:=msDV/0x3a4d5870*2
2017/03/07 15:53:05.302 kid6| 33,3| Pipeline.cc(69) popMe: Pipeline 
0xc8a25a0 drop 0x1b8b2cd0*3

--
2017/03/07 15:53:05.318 kid6| 20,3| store_swapout.cc(377) 
mayStartSwapOut: already allowed
2017/03/07 15:53:05.318 kid6| 73,3| HttpRequest.cc(658) storeId: sent 
back effectiveRequestUrl: 
http://img.olx.com.br/images/31/310706022882854.jpg
2017/03/07 15:53:05.318 kid6| 20,3| store_swapmeta.cc(52) 
storeSwapMetaBuild: storeSwapMetaBuild URL: 
http://img.olx.com.br/images/31/310706022882854.jpg
2017/03/07 15:53:05.318 kid6| 54,2| ../../../src/ipc/Queue.h(490) 
findOldest: peeking from 7 to 6 at 1
2017/03/07 15:53:05.318 kid6| 47,2| IpcIoFile.cc(415) canWait: cannot 
wait: 1136930974 oldest: ipcIo6.381049w7
2017/03/07 15:53:05.318 kid6| 20,2| store_io.cc(38) storeCreate: 
storeCreate: no swapdirs for e:m381432=w1p2DV/0x3ace6740*4
2017/03/07 15:53:05.318 kid6| 90,3| store_client.cc(729) invokeHandlers: 
InvokeHandlers: 5E66A439D7F0C1ED9A6B0742518C59B6
2017/03/07 15:53:05.318 kid6| 90,3| store_client.cc(735) invokeHandlers: 
StoreEntry::InvokeHandlers: checking client #0
2017/03/07 15:53:05.318 kid6| 11,3| http.cc(1090) persistentConnStatus: 
local=10.1.10.9:59284 remote=104.113.39.233:80 FD 612 flags=1 eof=0
2017/03/07 15:53:05.318 kid6| 5,3| comm.cc(559) commSetConnTimeout: 
local=10.1.10.9:59284 remote=104.113.39.233:80 FD 612 flags=1 timeout 600
2017/03/07 15:53:05.318 kid6| 5,4| AsyncCallQueue.cc(55) fireNext: 
entering TunnelBlindCopyReadHandler(local=10.1.10.9:54832 
remote=201.57.89.205:443 FD 144 flags=1, data=0x2d04

83d8, size=4344, buf=0x207b41b0)
--
2017/03/07 15:53:05.793 kid6| 20,3| store.cc(1342) validLength: 
storeEntryValidLength: Checking '96501FC1CC75A4D3B9B6BA8A2D899B45'
2017/03/07 15:53:05.793 kid6| 19,4| MemObject.cc(431) isContiguous: 
MemObject::isContiguous: Returning true
2017/03/07 15:53:05.793 kid6| 20,3| store_swapmeta.cc(52) 
storeSwapMetaBuild: storeSwapMetaBuild URL: 
http://globoesporte.globo.com/busca/?q=gabriel+jesus&ps=on
2017/03/07 15:53:05.793 kid6| 54,2| ../../../src/ipc/Queue.h(490) 
findOldest: peeking from 7 to 6 at 1
2017/03/07 15:53:05.793 kid6| 47,2| IpcIoFile.cc(415) canWait: cannot 
wait: 1136931448 oldest: ipcIo6.381049w7
2017/03/07 15:53:05.793 kid6| 20,2| store_io.cc(38) storeCreate: 
storeCreate: no swapdirs for e:=sw1V/0x206e17d0*1
2017/03/07 15:53:05.793 kid6| 90,3| store_client.cc(729) invokeHandlers: 
InvokeHandlers: 96501FC1CC75A4D3B9B6BA8A2D899B45
2017/03/07 15:53:05.793 kid6| 20,3| store.cc(494) unlock: 
StoreEntry::forcePublicKey+Vary unlocking key 
96501FC1CC75A4D3B9B6BA8A2D899B45 e:=sV/0x206e17d0*1
2017/03/07 15:53:05.793 kid6| 90,3| store_client.cc(755) 
storePendingNClients: storePendingNClients: returning 0
2017/03/07 15:53:05.793 kid6| 20,3| store.cc(396) destroyStoreEntry: 
destroyStoreEntry: destroying 0x206e17d8
2017/03/07 15:53:05.793 kid6| 20,3| store.cc(378) destroyMemObject: 
destroyMemObject 0x354a3970



2. Reducing max-swap-rate from 200 to, say, 20. If your disks cannot
keep up _and_ there is a Squid bug that screws something up when your
disks cannot keep up, then this blind configuration change may avoid
triggering that bug.
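As a sketch, that change would only touch max-swap-rate; the other
options stay as in the original squid.conf:

    cache_dir rock /cache2 11 min-size=0 max-size=65536 max-swap-rate=20 swap-timeout=360
    cache_dir rock /cache3 11 min-size=65537 max-size=262144 max-swap-rate=20 swap-timeout=380
    cache_dir rock /cache4 11 min-size=262145 max-swap-rate=20 swap-timeout=500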


I'll try this as a last-resort attempt...


3. Collect enough iostat 5-second outputs (or equivalent) to correlate
system performance with cache.log messages. I would also collect other
system activity during those hours. The "atop" tool may be useful for
collecting everything in one place. You will probably want to restart
Squid for a clean experiment/collection.



I'm trying to figure out how to use the atop information...
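A minimal sketch, assuming a stock atop installation (the output path is
just an example): atop can write raw samples to a file with -w and replay
them later with -r:

    atop -w /var/log/atop/squid-trace.raw 5 &
    # later, to browse the recorded 5-second samples:
    atop -r /var/log/atop/squid-trace.raw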

--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



[squid-dev] my kids are dying :( Squid Cache: Version 4.0.18

2017-03-13 Thread Heiler Bemerguy


root@proxy:/var/log/squid# cat cache.log |grep assertion
2017/03/13 07:50:54 kid6| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 08:17:46 kid4| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 08:45:32 kid4| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 11:49:57 kid1| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 11:59:30 kid4| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 12:06:59 kid3| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 12:08:47 kid4| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 12:25:59 kid1| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"
2017/03/13 12:33:02 kid6| assertion failed: client_side_reply.cc:1167: 
"http->storeEntry()->objectLen() >= headers_sz"


Maybe this explains why I get a lot of "SIZE MISMATCH" messages while 
rebuilding the rock store cache?


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



Re: [squid-dev] Rock store stopped accessing discs

2017-03-14 Thread Heiler Bemerguy


On 07/03/2017 20:26, Alex Rousskov wrote:

These stuck disker responses probably explain why your disks do not
receive any traffic. It is potentially important that both disker
responses shown in your logs got stuck at approximately the same
absolute time ~13 days ago (around 2017-02-22, give or take a day;
subtract 1136930911 milliseconds from 15:53:05.255 in your Squid time
zone to know the "exact" time when those stuck requests were queued).

How can a disker response get stuck? Most likely, something unusual
happened ~13 days ago. This could be a Squid bug and/or a kid restart.

* Do all currently running Squid kid processes have about the same start
time? [1]

* Do you see ipcIo6.381049w7 or ipcIo6.153009r8 mentioned in any old
non-debugging messages/warnings?


I searched the log files from those days and found nothing unusual; grep 
returns nothing for ipcIo6.381049w7 or ipcIo6.153009r8.


On that day I couldn't verify whether the kids still had the same 
uptime: I had reformatted the /cache2, /cache3 and /cache4 partitions 
and started fresh with squid -z. But looking at the ps output right now, 
I think I can answer that question:


root@proxy:~# ps auxw |grep squid-
proxy10225  0.0  0.0 13964224 21708 ?  SMar10   0:10 
(squid-coord-10) -s
proxy10226  0.1 12.5 14737524 8268056 ?SMar10   7:14 
(squid-disk-9) -s
proxy10227  0.0 11.6 14737524 7686564 ?SMar10   3:08 
(squid-disk-8) -s
proxy10228  0.1 14.9 14737540 9863652 ?SMar10   7:30 
(squid-disk-7) -s
proxy18348  3.5 10.3 17157560 6859904 ?SMar13  48:44 
(squid-6) -s
proxy18604  2.8  9.0 16903948 5977728 ?SMar13  37:28 
(squid-4) -s
proxy18637  1.7 10.8 16836872 7163392 ?RMar13  23:03 
(squid-1) -s
proxy20831 15.3 10.3 17226652 6838372 ?S08:50  39:51 
(squid-2) -s
proxy21189  5.3  2.8 16538064 1871788 ?S12:29   2:12 
(squid-5) -s
proxy21214  3.8  1.5 16448972 1012720 ?S12:43   1:03 
(squid-3) -s
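For exact per-kid start timestamps, a GNU ps invocation along these lines
should work; the column list is just one possible choice:

    ps -eo lstart,etime,pid,args | grep '[s]quid-'

(The [s] trick keeps grep from matching its own process.)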


The diskers aren't dying, but the workers are, a lot, with that "assertion 
failed: client_side_reply.cc:1167: http->storeEntry()->objectLen() >= 
headers_sz" thing.


Looking at df and iostat, it seems /cache3 isn't being accessed anymore. 
(I think it corresponds to squid-disk-8 above; look at its CPU time usage.)


Another weird thing: lots of timeouts and overflows are happening during 
non-active hours. From 0h to 7h we have maybe 1-2% of the clients we 
usually have from 8h to 17h (business hours).


2017/03/14 00:26:50 kid3| WARNING: abandoning 23 /cache4/rock I/Os after 
at least 7.00s timeout
2017/03/14 00:26:53 kid1| WARNING: abandoning 1 /cache4/rock I/Os after 
at least 7.00s timeout
2017/03/14 02:14:48 kid5| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo5.68259w9
2017/03/14 06:33:43 kid3| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo3.55919w9
2017/03/14 06:57:53 kid3| ERROR: worker I/O push queue for /cache4/rock 
overflow: ipcIo3.58130w9


This cache4 partition is where huge files would be stored:
maximum_object_size 4 GB
cache_dir rock /cache2 11 min-size=0 max-size=65536 max-swap-rate=150 swap-timeout=360
cache_dir rock /cache3 11 min-size=65537 max-size=262144 max-swap-rate=150 swap-timeout=380
cache_dir rock /cache4 11 min-size=262145 max-swap-rate=150 swap-timeout=500


I still don't know why /cache3 stopped while /cache4 is still active, even 
with all those warnings and errors.. :/


--
Atenciosamente / Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



Re: [squid-dev] Rock store stopped accessing discs

2017-03-15 Thread Heiler Bemerguy


Hello list,

I've made a simple test here.

cache_dir rock /cache2 11 min-size=0 max-size=65536
#max-swap-rate=150 swap-timeout=360
cache_dir rock /cache3 11 min-size=65537 max-size=262144
#max-swap-rate=150 swap-timeout=380
cache_dir rock /cache4 11 min-size=262145
#max-swap-rate=150 swap-timeout=500

I commented out all the max-swap-rate and swap-timeout options. Guess 
what? No more errors in cache.log..

before:
root@proxy:/var/log/squid# cat cache.log.2 |grep overflow |wc -l
53

now:
root@proxy:/var/log/squid# cat cache.log |grep overflow |wc -l
1

before:
root@proxy:/var/log/squid# cat cache.log.2 |grep "7.00s timeout" |wc -l
86

now:
root@proxy:/var/log/squid# cat cache.log |grep "7.00s timeout" |wc -l
0

It seems the code that "shapes" the rock store I/Os is kinda buggy in 
version 4.0.18.


--
Atenciosamente / Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 07/03/2017 20:26, Alex Rousskov wrote:

On 03/07/2017 01:08 PM, Heiler Bemerguy wrote:


Some log from right now...

Here is my analysis:


15:53:05.255| ipc/Queue.h findOldest: peeking from 7 to 6 at 1

Squid worker (kid6) is looking at the queue of disker (kid7) responses.
There is just one response in the queue.



IpcIoFile.cc canWait: cannot wait: 1136930911 oldest: ipcIo6.381049w7

Squid worker is trying to estimate whether it has enough time to queue
(and eventually perform) more disk I/O. The expected wait is
1'136'930'911 milliseconds or ~13 days. That is longer than the
configured cache_dir swap-timeout (a few hundred milliseconds) so Squid
refuses to queue this disk I/O request:


store_io.cc storeCreate: no swapdirs for e:=sw1p2RDV/0x206e17d0*4
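(Sanity check on that figure: 1'136'930'911 ms / 86'400'000 ms per day is
roughly 13.16 days.)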


The same story happens in your other log snippets AFAICT:


15:53:05.302| ipc/Queue.h findOldest: peeking from 8 to 6 at 1
IpcIoFile.cc canWait: cannot wait: 1136930958 oldest: ipcIo6.153009r8
store_io.cc storeCreate: no swapdirs for e:=msw1DV/0x3a4d5870*2

and


15:53:05.318| ipc/Queue.h findOldest: peeking from 7 to 6 at 1
IpcIoFile.cc canWait: cannot wait: 1136930974 oldest: ipcIo6.381049w7
store_io.cc storeCreate: no swapdirs for e:m381432=w1p2DV/0x3ace6740*4

and


15:53:05.793| ipc/Queue.h findOldest: peeking from 7 to 6 at 1
IpcIoFile.cc canWait: cannot wait: 1136931448 oldest: ipcIo6.381049w7
store_io.cc storeCreate: no swapdirs for e:=sw1V/0x206e17d0*1


Focusing on one disker (kid7), we can see that the oldest response does
not change: It is always ipcIo6.381049w7. This stuck response results in
gradual increment of the expected wait time with every check, matching
wall clock time increment:


15:53:05.255| cannot wait: 1136930911
15:53:05.318| cannot wait: 1136930974
15:53:05.793| cannot wait: 1136931448
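(Indeed, the increments match the wall clock: 1136930974 - 1136930911 = 63,
and 15:53:05.318 - 15:53:05.255 = 63 ms; the next step, 474 vs. 475 ms,
agrees to within a millisecond.)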

These stuck disker responses probably explain why your disks do not
receive any traffic. It is potentially important that both disker
responses shown in your logs got stuck at approximately the same
absolute time ~13 days ago (around 2017-02-22, give or take a day;
subtract 1136930911 milliseconds from 15:53:05.255 in your Squid time
zone to know the "exact" time when those stuck requests were queued).
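A quick way to do that subtraction, assuming GNU date and rounding the
offset to whole seconds (1136930911 ms is about 1136931 s):

    date -d '2017-03-07 15:53:05 1136931 seconds ago'
    # should print roughly: Wed Feb 22 12:04:14 2017 (local/Squid time zone)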

How can a disker response get stuck? Most likely, something unusual
happened ~13 days ago. This could be a Squid bug and/or a kid restart.

* Do all currently running Squid kid processes have about the same start
time? [1]

* Do you see ipcIo6.381049w7 or ipcIo6.153009r8 mentioned in any old
non-debugging messages/warnings?

[1]
http://stackoverflow.com/questions/5731234/how-to-get-the-start-time-of-a-long-running-linux-process


Thank you,

Alex.





Re: [squid-dev] Rock store stopped accessing discs

2017-03-17 Thread Heiler Bemerguy
.com.br'
2017/03/15 18:50:07 kid1| ipcacheParse No Address records in response to 
'www.portaldoartesanato.com.br'
2017/03/15 18:50:07 kid1| ipcacheParse No Address records in response to 
'www.portaldoartesanato.com.br'
2017/03/15 18:50:07 kid1| ipcacheParse No Address records in response to 
'www.portaldoartesanato.com.br'
2017/03/15 19:01:29 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:15:05 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:18:02 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:22:27 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:26:19 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:43:52 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:44:25 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:44:54 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:48:25 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 19:51:03 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:02:08 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:02:22 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:03:13 kid1| WARNING: Closing client connection due to 
lifetime timeout
2017/03/15 20:03:13 kid1| 
http://static.bn-static.com/css-45023/desktop/page_home.css
2017/03/15 20:05:05 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:11:31 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:15:08 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:16:51 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:18:55 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:28:52 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:30:06 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 2 out of 2 I/Os
2017/03/15 20:31:21 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:33:32 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:33:51 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:42:33 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:43:33 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:47:54 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:49:29 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:50:29 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:51:28 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 2 out of 2 I/Os
2017/03/15 20:51:57 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:57:09 kid2| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os
2017/03/15 20:58:44 kid1| WARNING: communication with /cache2/rock may 
be too slow or disrupted for about 7.00s; rescued 1 out of 1 I/Os



--
Atenciosamente / Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751
