Re: [squid-users] squid-users Digest, Vol 91, Issue 1

2022-03-01 Thread Frank Ruiz
Version is 5.2-1
https://salsa.debian.org/squid-team/squid/-/tree/debian/5.2-1

On Tue, Mar 1, 2022 at 4:00 AM 
wrote:

> Today's Topics:
>
>1. Squid 5 OOM (Frank Ruiz)
>2. Re: Squid 5 OOM (Adam Majer)
>
>
> ------
>
> Message: 1
> Date: Mon, 28 Feb 2022 08:42:08 -0800
> From: Frank Ruiz 
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Squid 5 OOM
> Message-ID:
>  nwgalpurzetcy2t5pjtsi+ybm...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Greetings,
>
> I am using squid 5, and I am trying to use it strictly as a proxy only with
> no caching.
>
> kernel: [7520199.557517] Out of memory: Killed process 17574
> (squid5) total-vm:15926268kB, anon-rss:15021268kB, file-rss:0kB,
> shmem-rss:72kB, UID:13 pgtables:31104kB oom_score_adj:0
>
> Is there a way to disable caching completely for my use case?
>
> --
>
> Message: 2
> Date: Tue, 1 Mar 2022 10:45:19 +0100
> From: Adam Majer 
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid 5 OOM
> Message-ID: 
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> On 2/28/22 17:42, Frank Ruiz wrote:
> > Greetings,
> >
> > I am using squid 5, and I am trying to use it strictly as a proxy only
> > with no caching.
> >
> > kernel: [7520199.557517] Out of memory: Killed process
> > 17574 (squid5) total-vm:15926268kB, anon-rss:15021268kB, file-rss:0kB,
> > shmem-rss:72kB, UID:13 pgtables:31104kB oom_score_adj:0
>
> This looks like a memory leak. It got killed at almost 16GB memory usage.
>
> Which exact version are you using?
>
> - Adam
>
>
> --
>
> Subject: Digest Footer
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
>


[squid-users] Squid 5 OOM

2022-02-28 Thread Frank Ruiz
Greetings,

I am using squid 5, and I am trying to use it strictly as a proxy only with
no caching.

 ip-10-4-0-200 kernel: [7520199.557517] Out of memory: Killed process 17574
(squid5) total-vm:15926268kB, anon-rss:15021268kB, file-rss:0kB,
shmem-rss:72kB, UID:13 pgtables:31104kB oom_score_adj:0

Is there a way to disable caching completely for my use case?
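As a reference point, here is a minimal squid.conf sketch for running cache-less (directive names are from the Squid configuration reference; exact defaults vary by version, so treat this as a starting point rather than a drop-in config):

```
# Never cache any response
cache deny all
# Omit all cache_dir lines entirely => no on-disk cache
# Shrink the in-memory object cache to the minimum
cache_mem 8 MB
# Return idle pool memory to the system instead of holding it
memory_pools off
```

If the process still grows to ~16GB with caching disabled like this, that points at a leak rather than cache growth.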


[squid-users] negative ttl and 404

2009-10-08 Thread Frank Ruiz
Is there a way to force a maximum age on 404 errors? We have a
situation where it appears 404s are getting cached, and squid is
honoring the TTLs from the webserver (which could be hours or days),
so the 404s persist beyond the default negative_ttl of 5 minutes.

Can we force this to 5 minutes regardless of the TTL that is passed by
the webserver?

Maybe an acl/refresh pattern or something? Didn't see anything for 404s.

Thanks!
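For what it's worth, the relevant squid.conf knob is negative_ttl, though in the versions I'm familiar with it applies to error replies that carry no explicit freshness information, so a 404 served with its own Expires/Cache-Control headers may still be honored (check the release notes for your version). A hedged sketch:

```
# Remember negative responses (404s, etc.) for at most 5 minutes
# when the origin supplies no explicit expiry of its own
negative_ttl 5 minutes
```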


[squid-users] 2 squid instances

2007-10-05 Thread Frank Ruiz
I have 2 squid instances.

Both are taking the same number of connections, and both are
connecting to the same pool of origins via a load balancer.

  2344 root   13G   13G cpu1 00   4:03:10  25% squid/1
  2096 root   13G   13G sleep   310   4:47:22 9.2% squid/1


After about 14 hours of runtime, the instances hit 25% utilization,
and then never seem to restabilize.


This is all I see in my cache.log for the instance that is maxed out:

2007/10/05 10:47:25| clientReadRequest: FD 107 (82.38.189.46:5430)
Invalid Request
2007/10/05 11:05:13| WARNING: unparseable HTTP header field
{Accept-CharsetGET /pict/320155568274_1.jpg HTTP/1.1}
2007/10/05 11:06:55| parseHttpRequest: Unsupported method 'Connection:'
2007/10/05 11:06:55| clientReadRequest: FD 1808 (84.71.71.234:35312)
Invalid Request
2007/10/05 11:06:55| parseHttpRequest: Unsupported method 'Connection:'
2007/10/05 11:06:55| clientReadRequest: FD 1007 (12.25.108.29:63647)
Invalid Request
2007/10/05 11:22:01| parseHttpRequest: Unsupported method 'Connection:'
2007/10/05 11:22:01| clientReadRequest: FD 1612 (81.104.41.63:48329)
Invalid Request
2007/10/05 11:22:01| parseHttpRequest: Unsupported method 'Connection:'
2007/10/05 11:22:01| clientReadRequest: FD 1685 (74.236.38.154:50482)
Invalid Request
2007/10/05 11:22:06| parseHttpRequest: Unsupported method 'Connection:'
2007/10/05 11:22:06| clientReadRequest: FD 6278 (83.112.151.249:53849)
Invalid Request


The box is a 2 cpu dual core, so each squid instance maxes out at 25% cpu.
They are strictly in-memory caches (no disk), and each has 9G of
RAM per instance.

Can someone give me an idea of what is happening?

Thanks.


[squid-users] Re: 2 squid instances

2007-10-05 Thread Frank Ruiz
) = 0
fcntl(3176, F_GETFL)= 130
fcntl(3176, F_SETFD, 0x0083)= 0
fcntl(3176, F_GETFL)= 130
fcntl(3176, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3177
getsockname(3177, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3177, F_GETFL)= 130
fcntl(3177, F_SETFD, 0x0083)= 0
fcntl(3177, F_GETFL)= 130
fcntl(3177, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3179
getsockname(3179, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3179, F_GETFL)= 130
fcntl(3179, F_SETFD, 0x0083)= 0
fcntl(3179, F_GETFL)= 130
fcntl(3179, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3180
getsockname(3180, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFD, 0x0083)= 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(9, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) Err#11 EAGAIN
read(5703,  G E T   / f i l e / 1 8.., 4095)  = 607
write(5704,  H T T P / 1 . 0   2 0 0.., 2416) = 2416
read(5704, 0x2F9114E50, 4095)   Err#11 EAGAIN
read(5706,  G E T   / f i l e / 1 6.., 4095)  = 382
write(5707,  H T T P / 1 . 0   2 0 0.., 2493) = 2493
read(5707, 0x29D770F60, 4095)   Err#11 EAGAIN
write(5709,  H T T P / 1 . 0   2 0 0.., 2681) = 2681
read(5709, 0x35857DA50, 4095)   Err#11 EAGAIN
write(5717,  G E T   / f i l e / 2 9.., 538)  = 538
write(5718,  G E T   / f i l e / 2 6.., 395)  = 395
write(5720,  H T T P / 1 . 0   2 0 0.., 2765) = 2765
read(5720, 0x35882EC00, 4095)   Err#11 EAGAIN
write(5721,  H T T P / 1 . 0   2 0 0.., 2993) = 2993
read(5721, 0x3577FA750, 4095)   Err#11 EAGAIN
write(5722,  H T T P / 1 . 0   2 0 0.., 2167) = 2167
read(5722, 0x246BF64B0, 4095)   Err#11 EAGAIN
read(5723, 0x358E66F80, 4095)   Err#131 ECONNRESET
close(5723) = 0
write(5724,  H T T P / 1 . 0   2 0 0.., 1989) = 1989
read(5724, 0x35E2A5B40, 4095)   Err#11 EAGAIN
write(5725,  H T T P / 1 . 0   2 0 0.., 2038) = 2038
close(5725) = 0
read(5727,  G E T   / f i l e / 3 3.., 4095)  = 382
write(5733,  H T T P / 1 . 0   2 0 0.., 2905) = 2905
read(5733, 0x357CCA630, 4095)   Err#11 EAGAIN
write(5734,  H T T P / 1 . 0   2 0 0.., 3072) = 3072
read(5734, 0x35C535630, 4095)   Err#11 EAGAIN
write(5735,  H T T P / 1 . 0   2 0 0.., 3366) = 3366





[squid-users] Re: 2 squid instances

2007-10-05 Thread Frank Ruiz
Here are my compile time options:

./configure --enable-storeio=diskd,null --enable-snmp --enable-devpoll





[squid-users] Re: 2 squid instances

2007-10-05 Thread Frank Ruiz
Also,

I have 32G of additional RAM available, and my swap looks healthy:
0 0 0 41870868 33463280 0 0 0  0  0  0  0  0  0  0  0 7381 13369 1557 36 6 58
 0 0 0 41870868 33463280 0 0 0  0  0  0  0  2  0  0  0 7503 15598 1498 36 8 57
 0 0 0 41870868 33463280 0 0 0  0  0  0  0  0  0  0  0 8071 15602 1662 35 7 58
 0 0 0 41870868 33463280 0 0 0  0  0  0  0  3  0  0  0 8121 16271 1487 36 12 53
 0 0 0 41870868 33463280 0 0 0  0  0  0  0  0  0  0  0 8154 15720 1545 36 7 57
 0 0 0 41870868 33463280 0 0 0  0  0  0  0  0  0  0  0 8322 15764 1491 35 8 56
 0 0 0 41870856 33463268 0 0 0  0  0  0  0  0  0  0  0 8386 19134 1863 34 8 58





[squid-users] 2 squid instances

2007-10-05 Thread Frank Ruiz
)= 0
fcntl(3158, F_GETFL)= 130
fcntl(3158, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3161
getsockname(3161, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3161, F_GETFL)= 130
fcntl(3161, F_SETFD, 0x0083)= 0
fcntl(3161, F_GETFL)= 130
fcntl(3161, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3169
getsockname(3169, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3169, F_GETFL)= 130
fcntl(3169, F_SETFD, 0x0083)= 0
fcntl(3169, F_GETFL)= 130
fcntl(3169, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3170
getsockname(3170, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3170, F_GETFL)= 130
fcntl(3170, F_SETFD, 0x0083)= 0
fcntl(3170, F_GETFL)= 130
fcntl(3170, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3175
getsockname(3175, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3175, F_GETFL)= 130
fcntl(3175, F_SETFD, 0x0083)= 0
fcntl(3175, F_GETFL)= 130
fcntl(3175, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3176
getsockname(3176, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3176, F_GETFL)= 130
fcntl(3176, F_SETFD, 0x0083)= 0
fcntl(3176, F_GETFL)= 130
fcntl(3176, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3177
getsockname(3177, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3177, F_GETFL)= 130
fcntl(3177, F_SETFD, 0x0083)= 0
fcntl(3177, F_GETFL)= 130
fcntl(3177, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3179
getsockname(3179, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3179, F_GETFL)= 130
fcntl(3179, F_SETFD, 0x0083)= 0
fcntl(3179, F_GETFL)= 130
fcntl(3179, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3180
getsockname(3180, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFD, 0x0083)= 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(9, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) Err#11 EAGAIN
read(5703,  G E T   / f i l e / 1 8.., 4095)  = 607
write(5704,  H T T P / 1 . 0   2 0 0.., 2416) = 2416
read(5704, 0x2F9114E50, 4095)   Err#11 EAGAIN
read(5706,  G E T   / f i l e / 1 6.., 4095)  = 382
write(5707,  H T T P / 1 . 0   2 0 0.., 2493) = 2493
read(5707, 0x29D770F60, 4095)   Err#11 EAGAIN
write(5709,  H T T P / 1 . 0   2 0 0.., 2681) = 2681
read(5709, 0x35857DA50, 4095)   Err#11 EAGAIN
write(5717,  G E T   / f i l e / 2 9.., 538)  = 538
write(5718,  G E T   / f i l e / 2 6.., 395)  = 395
write(5720,  H T T P / 1 . 0   2 0 0.., 2765) = 2765
read(5720, 0x35882EC00, 4095)   Err#11 EAGAIN
write(5721,  H T T P / 1 . 0   2 0 0.., 2993) = 2993
read(5721, 0x3577FA750, 4095)   Err#11 EAGAIN
write(5722,  H T T P / 1 . 0   2 0 0.., 2167) = 2167
read(5722, 0x246BF64B0, 4095)   Err#11 EAGAIN
read(5723, 0x358E66F80, 4095)   Err#131 ECONNRESET
close(5723) = 0
write(5724,  H T T P / 1 . 0   2 0 0.., 1989) = 1989
read(5724, 0x35E2A5B40, 4095)   Err#11 EAGAIN
write(5725,  H T T P / 1 . 0   2 0 0.., 2038) = 2038
close(5725) = 0
read(5727,  G E T   / f i l e / 3 3.., 4095)  = 382
write(5733,  H T T P / 1 . 0   2 0 0.., 2905) = 2905
read(5733, 0x357CCA630, 4095)   Err#11 EAGAIN
write(5734,  H T T P / 1 . 0   2 0 0.., 3072) = 3072
read(5734, 0x35C535630, 4095)   Err#11 EAGAIN
write(5735,  H T T P / 1 . 0   2 0 0.., 3366) = 3366




[squid-users] 2 instances

2007-10-05 Thread Frank Ruiz
)= 130
fcntl(3179, F_SETFD, 0x0083)= 0
fcntl(3179, F_GETFL)= 130
fcntl(3179, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(8, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 3180
getsockname(3180, 0xFD7FFFDFFAE0, 0xFD7FFFDFFADC, SOV_DEFAULT) = 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFD, 0x0083)= 0
fcntl(3180, F_GETFL)= 130
fcntl(3180, F_SETFL, FWRITE|FNONBLOCK)  = 0
accept(9, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) Err#11 EAGAIN
read(5703,  G E T   / f i l e / 1 8.., 4095)  = 607
write(5704,  H T T P / 1 . 0   2 0 0.., 2416) = 2416
read(5704, 0x2F9114E50, 4095)   Err#11 EAGAIN
read(5706,  G E T   / f i l e / 1 6.., 4095)  = 382
write(5707,  H T T P / 1 . 0   2 0 0.., 2493) = 2493
read(5707, 0x29D770F60, 4095)   Err#11 EAGAIN
write(5709,  H T T P / 1 . 0   2 0 0.., 2681) = 2681
read(5709, 0x35857DA50, 4095)   Err#11 EAGAIN
write(5717,  G E T   / f i l e / 2 9.., 538)  = 538
write(5718,  G E T   / f i l e / 2 6.., 395)  = 395
write(5720,  H T T P / 1 . 0   2 0 0.., 2765) = 2765
read(5720, 0x35882EC00, 4095)   Err#11 EAGAIN
write(5721,  H T T P / 1 . 0   2 0 0.., 2993) = 2993
read(5721, 0x3577FA750, 4095)   Err#11 EAGAIN
write(5722,  H T T P / 1 . 0   2 0 0.., 2167) = 2167
read(5722, 0x246BF64B0, 4095)   Err#11 EAGAIN
read(5723, 0x358E66F80, 4095)   Err#131 ECONNRESET
close(5723) = 0
write(5724,  H T T P / 1 . 0   2 0 0.., 1989) = 1989
read(5724, 0x35E2A5B40, 4095)   Err#11 EAGAIN
write(5725,  H T T P / 1 . 0   2 0 0.., 2038) = 2038
close(5725) = 0
read(5727,  G E T   / f i l e / 3 3.., 4095)  = 382
write(5733,  H T T P / 1 . 0   2 0 0.., 2905) = 2905
read(5733, 0x357CCA630, 4095)   Err#11 EAGAIN
write(5734,  H T T P / 1 . 0   2 0 0.., 3072) = 3072
read(5734, 0x35C535630, 4095)   Err#11 EAGAIN
write(5735,  H T T P / 1 . 0   2 0 0.., 3366) = 3366




[squid-users] Fwd: 2 instances - Fixed Issue

2007-10-05 Thread Frank Ruiz
Greetings,

Hopefully, I can save someone a few hours of heartache.

Summary:
If you plan to run squid on solaris 10 x86 configured to use
/dev/poll, make sure the OS is patched.

Thanks everyone for reading the spam below if you did.



-- Forwarded message --
From: Frank Ruiz [EMAIL PROTECTED]
Date: Oct 5, 2007 12:18 PM
Subject: 2 instances
To: squid-users squid-users@squid-cache.org


Also,

Just killed all connections to the squid process. CPU is still pegged
at 15% and decreasing.

Here is what the process is doing:

ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 1
accept(8, 0xFD7FFFDFFB90, 0xFD7FFFDFFB7C, SOV_DEFAULT) = 12
getsockname(12, 0xFD7FFFDFFB80, 0xFD7FFFDFFB7C, SOV_DEFAULT) = 0
fcntl(12, F_GETFL)  = 130
fcntl(12, F_SETFD, 0x0083)  = 0
fcntl(12, F_GETFL)  = 130
fcntl(12, F_SETFL, FWRITE|FNONBLOCK)= 0
accept(8, 0xFD7FFFDFFB90, 0xFD7FFFDFFB7C, SOV_DEFAULT) Err#11 EAGAIN
write(3, \f\0\0\0\0\b\0\0\f\0\0\0.., 16)  = 16
ioctl(3, DP_POLL, 0x005036B0)   = 1
read(12,  G E T   / f i l e / 1 1.., 4095)= 51
write(3, \f\0\0\0\0\b\0\0\f\0\0\0.., 32)  = 32
ioctl(3, DP_POLL, 0x005036B0)   = 1
write(12,  H T T P / 1 . 0   2 0 0.., 1736)   = 1736
close(12)   = 0
write(3, \f\0\0\0\0\b\0\0, 8) = 8
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 0
ioctl(3, DP_POLL, 0x005036B0)   = 1




Greetings,

Also, here is some truss output to show what the process is doing.

accept(9, 0xFD7FFFDFFAF0, 0xFD7FFFDFFADC, SOV_DEFAULT) Err#11 EAGAIN
write(5635,  G E T   / f i l e / 2 7.., 842)  = 842
write(5636,  G E T   / f i l e / 2 5.., 654)  = 654
write(5637,  G E T   / f i l e / 2 3.., 475)  = 475
write(5638,  H T T P / 1

[squid-users] squid-2.6.STABLE16 and xcalloc: Unable to allocate 1 blocks of 4108 bytes!

2007-10-03 Thread Frank Ruiz
Greetings,

I just upgraded to stable 16. My squid dies with the following error:


Oct  3 09:42:49 server01 squid[12275]: [ID 702911 daemon.alert]
xcalloc: Unable to allocate 1 blocks of 4108 bytes!

It does this consistently when I hit around 3GB of cache usage.

I have plenty of RAM (32GB), plenty of swap.


$ id
uid=60001(nobody) gid=60001(nobody)
$ ulimit -aH
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 65536
memory(kbytes) unlimited


I don't see any apparent ulimit limitations. The same system, with no
OS changes, was able to use 27GB of cache mem with
squid-2.6.STABLE14.

I upgraded due to tcp probing issues, and monitor issues.

This system is solaris 10 x86 btw.

Has anyone else run into this issue, or have any ideas? Thanks in advance.
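One possibility worth ruling out (my assumption, not something established in this thread): a 32-bit squid binary is limited to roughly 4 GB of address space regardless of installed RAM, which would fit xcalloc starting to fail around 3 GB of cache usage. A quick sketch of the check:

```shell
# Word size of the default userland; a 32-bit process tops out
# near 4 GB of address space no matter how much RAM is installed.
getconf LONG_BIT
# The squid binary itself can be inspected too (path is a guess):
# file /usr/local/squid/sbin/squid
```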


[squid-users] tcp timeout issue

2007-10-02 Thread Frank Ruiz
Greetings,

I patched squid2.6 stable 14 with the tcp probe patch.

It patched two files:

cache_cf.c
neighbors.c

However, after about 14 hours of good runtime, my response times
degraded, and I began to see errors again indicative of the TCP
probe issue:

2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| Detected DEAD Parent: 10.10.10.20
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed


The origin server is available, however I keep getting
revived/connection failed/dead cycles.

It seems that the only way to recover from this is a restart.

I am running solaris 10, and I had to download the gnu patch utility
in order to patch the src.

Here was the patch applied.

Index: src/cache_cf.c
===================================================================
RCS file: /cvsroot/squid/squid/src/cache_cf.c,v
retrieving revision 1.470
diff -u -p -r1.470 cache_cf.c
--- src/cache_cf.c  20 Jul 2007 21:08:47 -0000  1.470
+++ src/cache_cf.c  28 Aug 2007 23:46:47 -0000
@@ -1621,6 +1621,7 @@ parse_peer(peer ** head)
     p->stats.logged_state = PEER_ALIVE;
     p->monitor.state = PEER_ALIVE;
     p->monitor.interval = 300;
+    p->tcp_up = PEER_TCP_MAGIC_COUNT;
     if ((token = strtok(NULL, w_space)) == NULL)
        self_destruct();
     p->host = xstrdup(token);
Index: src/neighbors.c
===================================================================
RCS file: /cvsroot/squid/squid/src/neighbors.c,v
retrieving revision 1.318
diff -u -p -r1.318 neighbors.c
--- src/neighbors.c 20 Jul 2007 21:08:47 -0000  1.318
+++ src/neighbors.c 28 Aug 2007 23:46:47 -0000
@@ -1010,12 +1010,13 @@ peerDNSConfigure(const ipcache_addrs * i
        debug(0, 0) ("WARNING: No IP address found for '%s'!\n", p->host);
        return;
     }
-    p->tcp_up = PEER_TCP_MAGIC_COUNT;
     for (j = 0; j < (int) ia->count && j < PEER_MAX_ADDRESSES; j++) {
        p->addresses[j] = ia->in_addrs[j];
        debug(15, 2) ("--> IP address #%d: %s\n", j, inet_ntoa(p->addresses[j]));
        p->n_addresses++;
     }
+    if (!p->tcp_up)
+       peerProbeConnect((peer *) p);
     ap = &p->in_addr;
     memset(ap, '\0', sizeof(struct sockaddr_in));
     ap->sin_family = AF_INET;

Any ideas is much appreciated. Any special debug info you need, please
let me know.

Also, as a side note, I have monitorurl set as well:

cache_peer 10.10.10.20 parent 80 0 no-query no-digest originserver
monitorinterval=30 monitorurl=http://10.10.10.20/test.jpg

Thank you!


[squid-users] Re: tcp timeout issue

2007-10-02 Thread Frank Ruiz
Also,

Here is what was patched based on a diff performed:

server01# diff neighbors.c neighbors.c~
1016a1017
> p->tcp_up = PEER_TCP_MAGIC_COUNT;
1022,1023d1022
< if (!p->tcp_up)
<     peerProbeConnect((peer *) p);
server01# diff cache_cf.c cache_cf.c~
1629d1628
< p->tcp_up = PEER_TCP_MAGIC_COUNT;
server01#


On 10/2/07, Frank Ruiz [EMAIL PROTECTED] wrote:
 Greetings,

 I patched squid2.6 stable 14 with the tcp probe patch.

 It patched two files:

 cache_cf.c
 neighbors.c

 However, After about 14 hours of good runtime, my response times,
 began to suck, and began to see errors again indicative of the tcp
 probe issue:

 2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
 2007/10/02 01:57:16| Detected DEAD Parent: 10.10.10.20
 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed


 The origin server is available, however I keep getting
 revived / connection-failed / dead transitions.

 It seems that the only way to recover from this is a restart.

 I am running solaris 10, and I had to download the gnu patch utility
 in order to patch the src.

 Here was the patch applied.

 Index: src/cache_cf.c
 ===================================================================
 RCS file: /cvsroot/squid/squid/src/cache_cf.c,v
 retrieving revision 1.470
 diff -u -p -r1.470 cache_cf.c
 --- src/cache_cf.c  20 Jul 2007 21:08:47 -0000  1.470
 +++ src/cache_cf.c  28 Aug 2007 23:46:47 -0000
 @@ -1621,6 +1621,7 @@ parse_peer(peer ** head)
      p->stats.logged_state = PEER_ALIVE;
      p->monitor.state = PEER_ALIVE;
      p->monitor.interval = 300;
 +    p->tcp_up = PEER_TCP_MAGIC_COUNT;
      if ((token = strtok(NULL, w_space)) == NULL)
          self_destruct();
      p->host = xstrdup(token);
 Index: src/neighbors.c
 ===================================================================
 RCS file: /cvsroot/squid/squid/src/neighbors.c,v
 retrieving revision 1.318
 diff -u -p -r1.318 neighbors.c
 --- src/neighbors.c 20 Jul 2007 21:08:47 -0000  1.318
 +++ src/neighbors.c 28 Aug 2007 23:46:47 -0000
 @@ -1010,12 +1010,13 @@ peerDNSConfigure(const ipcache_addrs * i
     debug(0, 0) ("WARNING: No IP address found for '%s'!\n", p->host);
     return;
  }
 -p->tcp_up = PEER_TCP_MAGIC_COUNT;
  for (j = 0; j < (int) ia->count && j < PEER_MAX_ADDRESSES; j++) {
     p->addresses[j] = ia->in_addrs[j];
     debug(15, 2) ("--> IP address #%d: %s\n", j, inet_ntoa(p->addresses[j]));
     p->n_addresses++;
  }
 +if (!p->tcp_up)
 +    peerProbeConnect((peer *) p);
  ap = &p->in_addr;
  memset(ap, '\0', sizeof(struct sockaddr_in));
  ap->sin_family = AF_INET;

 Any ideas are much appreciated. If you need any special debug info,
 please let me know.

 Also, as a side note, I have monitorurl set as well

 cache_peer 10.10.10.20 parent 80 0 no-query no-digest originserver
 monitorinterval=30 monitorurl=http://10.10.10.20/test.jpg

 Thank you!



[squid-users] swapping/resource

2007-09-28 Thread Frank Ruiz
Greetings,

I have a system with 32GB of RAM, and I have my squid cache configured
as an in memory cache only.

Currently Squid cache mem is set to:
cache_mem 18000 MB

Cache dir is set to :
cache_dir null /ebay/local/var/nocache
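
A memory-only setup of this shape can be sketched as the following
squid.conf fragment (sizes illustrative; whether `memory_pools off`
actually returns memory to the OS depends on the platform's malloc):

```
# Memory-only cache: a null cache_dir disables the disk store entirely
cache_dir null /ebay/local/var/nocache
cache_mem 18000 MB

# Optional: ask Squid not to hold freed memory in its own pools
memory_pools off
```

On Solaris, swap is typically reserved up front for anonymous memory,
which would explain why a 27G heap shows a matching 27G of swap
allocated even when nothing is actually paged out.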

Two things I am hoping to do is:

Reduce swap.
It appears that for every gigabyte of physical memory used, a
corresponding amount of swap is also reserved.

i.e. with squid using 27G of RAM, 27G of swap is also allocated. There
is nothing else running on this box other than squid.

Reduce CPU utilization. Currently CPU is maxed out.

I am using Solaris 10 btw.

Any ideas are greatly appreciated.

Thank you.


[squid-users] Allow Referrer

2007-09-05 Thread Frank Ruiz
Greetings Squidlings ;0),

I need to retain the referrer in the http header of an incoming client request.

client (with referrer in http request) -> squid -> 3rd party

The 3rd party needs to see the referrer portion of the http header.

Does this require anything special?
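
As far as I know, Squid forwards the Referer header by default, so
normally nothing special is needed. If header filtering is enabled
elsewhere in squid.conf, an explicit allow can be sketched like this
(Squid 2.6 `header_access` syntax; later versions renamed it
`request_header_access`):

```
# Ensure the client's Referer header is passed through to the 3rd party
header_access Referer allow all
```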

Thank you


[squid-users] diskd question

2007-08-29 Thread Frank Ruiz
Greetings,

So I am using local disk for my cache. This consists of a 500G SATA drive.

My cache size is 50G.

I tried using queue sizes of Q1=72 and Q2=64, however it looks like I
am still I/O constrained, with http requests taking up to 11 seconds.

I am using UFS. Logging and access-time updates have been disabled.

I am now running at:
Q1=12 Q2=10

Does anyone happen to have any suggestions?
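
For reference, the diskd queue thresholds are set on the cache_dir line
itself. A sketch with the compiled-in defaults (the path here mirrors
the setup described above and is not a recommendation):

```
# cache_dir diskd <path> <size-MB> <L1-dirs> <L2-dirs> [Q1=n] [Q2=n]
# Q1: queue depth past which Squid stops opening/creating swap files
# Q2: queue depth past which Squid blocks waiting on the diskd process
cache_dir diskd /ebay/local/var/cache 50000 16 256 Q1=64 Q2=72
```

Lowering Q1/Q2 makes Squid back off the disk sooner, trading hit rate
for latency, which matches the Q1=12 Q2=10 experiment above.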

Thanks!


[squid-users] repopulate cache?

2007-08-25 Thread Frank Ruiz
Greetings,

I am not sure if this is possible, but if not, it would be a nice feature to have.

I am using an all in memory cache now. cache_dir is set to null.

However, if the system reboots, I lose my cache, and have to rebuild,
taking a toll on the origins.

Is there a way to flush an in-memory cache to disk, and use that data
to populate another in-memory cache?

The data is dynamic, so I would most likely flush to disk once a day
if this is possible.

What I am looking for is some way to replicate an in memory cache to
another host.

Thanks.


Re: [squid-users] Failed to select source

2007-08-24 Thread Frank Ruiz
Anyone happen to know when bug 1972 will be fixed...? ;0)

If I downrev to prestable12 will I still be impacted?

On 8/24/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 On tor, 2007-08-23 at 09:57 -0700, Frank Ruiz wrote:
  Greetings,
 
  I have a squid cache that was working fine for a couple of days, and
  then I get the following:
 
  2007/08/23 09:53:40| Failed to select source for
  'http://url.somehost.com/url.file'
  2007/08/23 09:53:40|   always_direct = 0
  2007/08/23 09:53:40|never_direct = 0
  2007/08/23 09:53:40|timedout = 0


 Is this a reverse proxy?

 Perhaps you are bitten by bug #1972.

 Regards
 Henrik




[squid-users] Failed to select source

2007-08-23 Thread Frank Ruiz
Greetings,

I have a squid cache that was working fine for a couple of days, and
then I get the following:

2007/08/23 09:53:40| Failed to select source for
'http://url.somehost.com/url.file'
2007/08/23 09:53:40|   always_direct = 0
2007/08/23 09:53:40|never_direct = 0
2007/08/23 09:53:40|timedout = 0


I am able to retrieve the file using

telnet url.somehost.com 80
GET /url.file HTTP/1.0



Does anyone know what could be causing this issue?

Thanks in advance.


[squid-users] Origin server timeout

2007-08-21 Thread Frank Ruiz
Greetings,

I have an origin server that times out for up to 1 minute during
extreme conditions. As a result, TIME_WAIT connections on the squid
host go through the roof.

Can someone please recommend what tunables can be adjusted to fix this?

I currently have:
peer_connect_timeout

set to:
5 seconds



Is there anything else that can be done, or should I further reduce the timeout?
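
Beyond peer_connect_timeout, a few other directives govern the server
side of a request. A hedged sketch (directive names from the Squid 2.x
configuration reference; values illustrative, not recommendations):

```
connect_timeout 10 seconds       # establishing a direct forward connection
peer_connect_timeout 5 seconds   # establishing a connection to a cache_peer
read_timeout 1 minute            # abort if a server read stalls this long
```

Note that TIME_WAIT buildup is a TCP-level effect of churning many
short connections; shrinking timeouts further may increase churn rather
than reduce it.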

Thank you in advance!


[squid-users] Timeout values

2007-08-06 Thread Frank Ruiz
Greetings,

I am looking for some recommendations on ideal timeout values for a
squid cache serving up many images per second (1000+).

Also, I would like client connections to automatically close after 10
seconds. Anyone happen to know where this is set within squid?
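
A couple of directives control how long Squid holds client connections
open. A sketch (Squid 2.x directive names; the 10-second values are
illustrative, matching the goal stated above):

```
request_timeout 10 seconds             # idle time allowed before a request arrives
persistent_request_timeout 10 seconds  # idle time between requests on a kept-alive connection
client_lifetime 1 day                  # absolute upper bound on any client connection
```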

Thanks