Re: [squid-users] QUESTION ABOUT CHOICE BETWEEN SQUID 2.7 or 3.1.8

2010-09-22 Thread c0re
I have the opposite opinion, based on my experience of running c-icap on
FreeBSD. It was unstable and almost impossible to debug, so I moved to
squidclamav. I'm using Squid 3.1.8.
After about 4-5 months, Squid sometimes lost its connection to c-icap even
though c-icap itself was fine: icapclient reported c-icap as working, but
Squid showed an ICAP timeout error or something like that. Only restarting
c-icap AND Squid solved the problem for a while, then it happened again. I
got tired of hunting for the problem in unreadable c-icap debug output, with
no ICAP messages at all in the Squid logs beyond the timeout, so I moved to
squidclamav and it works pretty well!
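
For reference, the redirector-style squidclamav of that era hooks into Squid
with roughly the following two directives (a sketch only: the helper path and
child count are assumptions, and newer squidclamav releases run as a c-icap
service instead):

# sketch: redirector-style squidclamav (path and child count are assumed)
url_rewrite_program /usr/local/bin/squidclamav
url_rewrite_children 15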

2010/9/20 Henrik Nordström hen...@henriknordstrom.net:
 Mon 2010-09-20 at 19:28 +0200, patrick.la...@inserm.fr wrote:

 Hello

 First of all congratulations for your great work!

 I have one question for you please.

 I set up two Squid proxies with WCCP (+ squidclamav and squidGuard), but I'm
 reinstalling everything under two VMware ESXi hosts (VMware isn't the problem
 here). Would it be better to install the latest version 3.1.8, or version 2.7?
 Is the 3.1.8 build of 2010-09-20 a stable version?

 I would use 3.1.8. It allows you to replace squidclamav with c-icap +
 clamav for a better virus-scanning experience.

 Regards
 Henrik
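
A minimal sketch of what that c-icap + clamav hookup looks like on the Squid
3.1 side (the ICAP service URL is an assumption, and c-icap itself needs its
clamav service configured separately):

# sketch: point Squid 3.1 at a local c-icap/clamav service (service URL assumed)
icap_enable on
icap_service service_avi respmod_precache 0 icap://127.0.0.1:1344/avscan
adaptation_access service_avi allow all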




[squid-users] Utorrrent through squid

2010-09-22 Thread GIGO .

Hi all,
 
I am unable to run the uTorrent software through the Squid proxy due to an IPv6
tracker failure: I cannot connect to an IPv6 tracker.
 
1285141356.609152 10.1.97.27 TCP_MISS/504 1587 GET 
http://ipv6.torrent.ubuntu.com:6969/announce? - DIRECT/ipv6.torrent.ubuntu.com 
text/html [Host: ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]

I am using squid 2.7 Stable 9 release.
 
 
Is there a special configuration required for this on the operating system
(RHEL 5) or in Squid itself? Please advise.
 
regards,
 
Bilal Aslam   

[squid-users] squid splashpage

2010-09-22 Thread Han Boetes
 Hi,

I installed squid and used this page to set up a splash page:

  http://wiki.squid-cache.org/ConfigExamples/Portal/Splash

This works as expected, except that the customer wants it to work
slightly differently.

1) He wants the splash page to be displayed every hour, regardless of
   whether users keep browsing or not. How can I set that up?

2) I also noticed from the logs that, for example, Windows Update -- which is
   running on almost every computer in the universe right now -- uses HTTP,
   and since it connects as soon as a connection is established it triggers
   the splash page, so the client never gets to see the splash page. How can
   I make sure only browsers trigger the splash page?
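
That wiki recipe is based on the session helper, and both requests can
probably be handled there; the following is only a sketch, with the helper
path, timeouts and User-Agent patterns as assumptions:

# sketch only -- helper path, TTLs and browser patterns are assumptions
# 1) the session expires after one hour, so the splash page reappears hourly
external_acl_type session ttl=60 negative_ttl=0 children=1 %SRC \
    /usr/lib/squid/squid_session -t 3600
acl session external session
# 2) only treat real browsers as candidates for the splash page
acl realbrowser browser Firefox MSIE Chrome Safari Opera
http_access deny realbrowser !session
deny_info http://splash.example.com/splash.html session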



Kind regards,


Han Boetes



RE: [squid-users] Utorrrent through squid

2010-09-22 Thread GIGO .

So, Amos, does this mean that downloading torrents with an earlier version of
Squid is not possible at all?
 
 
regards,
Bilal 



 Date: Wed, 22 Sep 2010 20:27:29 +1200
 From: squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Utorrrent through squid

 On 22/09/10 19:56, GIGO . wrote:

 Hi all,

 I am unable to run utorrent software through squid proxy due to ipv6 tracker 
 failure.I am unable to connect to an ipv 6 tracker.

 1285141356.609 152 10.1.97.27 TCP_MISS/504 1587 GET 
 http://ipv6.torrent.ubuntu.com:6969/announce? - 
 DIRECT/ipv6.torrent.ubuntu.com text/html [Host: 
 ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
 uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
 Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
 GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
 ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
 xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]

 I am using squid 2.7 Stable 9 release.


 Squid-3.1 is required for IPv4/IPv6 gateway.


 For doing this is there a special configuration required on the Operating 
 system(RHEL 5 ) or squid itself. Please guide.


 http://wiki.squid-cache.org/KnowledgeBase/RedHat


 Amos
 --
 Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.8
 Beta testers wanted for 3.2.0.2 

Re: [squid-users] Frequent assertion failed: forward.cc:500: server_fd == fd errors

2010-09-22 Thread Henrik Nordström
Tue 2010-09-21 at 22:28 +0200, Ralf Hildebrandt wrote:
 On all of my 4 squid servers I'm getting these assertion failures:


 2010/09/21 22:16:08| statusIfComplete: Request not yet fully sent POST 
 http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi;
 2010/09/21 22:16:08| assertion failed: forward.cc:500: server_fd == fd

Please get a backtrace and file a bug report.

I have a rough idea of the area the problem is in. The first message is
seen when the server responds to a POST request before the request body
has been sent.

Regards
Henrik



[squid-users] client identifier in squid logs

2010-09-22 Thread Shoebottom, Bryan
Hello,

I have an interception proxy configuration using WCCP and a Cisco
router.  PAT/NAT happens on a device before the proxy, so my logs show
only the public IPs.

*Inet*
  |
Router---Proxy
  |
Firewall (PAT/NAT)
  |
*internal private network*


I checked the HTTP header, but can't find any host identifier info
there.  Without changing the placement of the proxy or moving away from
the interception configuration, am I able to get the internal IP of the
clients added to my logs?


I know this is a far stretch, but I'm hopeful someone else is in this
predicament and has come up with a solution/workaround.



--
Thanks,

Bryan Shoebottom
Network & Systems Specialist
Network Services & Computer Operations
Fanshawe College
Phone:  (519) 452-4430 x4904
Fax:  (519) 453-3231
bshoebot...@fanshawec.ca




Re: [squid-users] Problem restarting/stopping squid

2010-09-22 Thread Sergio Belkin
2010/9/16 Amos Jeffries squ...@treenet.co.nz:
 On 17/09/10 01:46, Sergio Belkin wrote:

 2010/9/16 Peter Albrechtalbre...@opensourceservices.de:

 Hi Sergio,

 I use squid squid-2.6.STABLE21-3.el5 on CentOS 5.4. The problem is
 that squid can't be restarted and rotate isnt working, I mean log
 rotating is done but I have to start  the service by hand.


 /var/log/squid/store.log {

 Do you actually make use of store.log for anything?
 It's primarily a cache debugging log and most installs can configure it not
 to be created.

 Amos
 --

I think I found the cause of the problem. Since each log was being rotated
at a different time, logrotate only executed squid -k rotate when it
rotated store.log, but not when it rotated access.log and cache.log.
So I've appended
postrotate
  /usr/sbin/squid -k rotate
endscript

to the end of both the access.log and cache.log sections.
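
A sketch of an alternative: handle all three logs in one stanza so -k rotate
fires only once per run (the paths and schedule here are assumptions, and it
presumes logfile_rotate 0 in squid.conf so Squid just reopens the files
instead of renaming them itself):

/var/log/squid/access.log /var/log/squid/cache.log /var/log/squid/store.log {
    weekly
    rotate 5
    missingok
    notifempty
    sharedscripts
    postrotate
        # logrotate has already renamed the files; squid only reopens them
        /usr/sbin/squid -k rotate
    endscript
}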

Thanks
-- 
--
Sergio Belkin http://www.sergiobelkin.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


Re: [squid-users] Problem restarting/stopping squid

2010-09-22 Thread Amos Jeffries

On 23/09/10 03:06, Sergio Belkin wrote:

2010/9/16 Amos Jeffriessqu...@treenet.co.nz:

On 17/09/10 01:46, Sergio Belkin wrote:


2010/9/16 Peter Albrechtalbre...@opensourceservices.de:


Hi Sergio,


I use squid squid-2.6.STABLE21-3.el5 on CentOS 5.4. The problem is
that squid can't be restarted and rotate isnt working, I mean log
rotating is done but I have to start  the service by hand.



I think that I found the cause of problem. Since I was rotating on a
different times each log, only executed squid -k rotate when it
rotated the store.log, but it didn't when it made the access.log and
cache log. So I've append
postrotate
   /usr/sbin/squid -k rotate
 endscript

at the end of both access.log and cache.log sections.



Careful that this does not make squid overwrite log data.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Utorrrent through squid

2010-09-22 Thread Amos Jeffries

On 22/09/10 22:43, GIGO . wrote:


So Amos does this means that downloading of torrents with earlier version of 
squid is not possible at all?


No, it's perfectly possible with IPv4 trackers.

His specific problem was with IPv6-only trackers.




Date: Wed, 22 Sep 2010 20:27:29 +1200
Subject: Re: [squid-users] Utorrrent through squid

On 22/09/10 19:56, GIGO . wrote:


Hi all,

I am unable to run utorrent software through squid proxy due to ipv6 tracker 
failure.I am unable to connect to an ipv 6 tracker.

1285141356.609 152 10.1.97.27 TCP_MISS/504 1587 GET 
http://ipv6.torrent.ubuntu.com:6969/announce? - DIRECT/ipv6.torrent.ubuntu.com 
text/html [Host: ipv6.torrent.ubuntu.com:6969\r\nUser-Agent: 
uTorrent/2040(21586)\r\nAccept-Encoding: gzip\r\n] [HTTP/1.0 504 Gateway 
Time-out\r\nServer: squid\r\nDate: Wed, 22 Sep 2010 07:42:36 
GMT\r\nContent-Type: text/html\r\nContent-Length: 1234\r\nX-Squid-Error: 
ERR_DNS_FAIL 0\r\nX-Cache: MISS from xyz.com\r\nX-Cache-Lookup: MISS from 
xyz.com:8080\r\nVia: 1.0 xyz.com:8080 (squid)\r\nConnection: close\r\n\r]

I am using squid 2.7 Stable 9 release.



Squid-3.1 is required for IPv4/IPv6 gateway.



For doing this is there a special configuration required on the Operating 
system(RHEL 5 ) or squid itself. Please guide.



http://wiki.squid-cache.org/KnowledgeBase/RedHat




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Frequent assertion failed: forward.cc:500: server_fd == fd errors

2010-09-22 Thread Ralf Hildebrandt
* Henrik Nordström hen...@henriknordstrom.net:
 Tue 2010-09-21 at 22:28 +0200, Ralf Hildebrandt wrote:
  On all of my 4 squid servers I'm getting these assertion failures:
 
 
  2010/09/21 22:16:08| statusIfComplete: Request not yet fully sent POST 
  http://setiboinc.ssl.berkeley.edu/sah_cgi/cgi;
  2010/09/21 22:16:08| assertion failed: forward.cc:500: server_fd == fd
 
 PLease get a backtrace and file a bug report.
 
 I have a rough idea in what area the problem is. The first message is
 seen when the server responds to a POST request before the request body
 have been sent.

It's been filed as bug 3063!

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Problem restarting/stopping squid

2010-09-22 Thread Sergio Belkin
2010/9/22 Amos Jeffries squ...@treenet.co.nz:
 On 23/09/10 03:06, Sergio Belkin wrote:

 2010/9/16 Amos Jeffriessqu...@treenet.co.nz:

 On 17/09/10 01:46, Sergio Belkin wrote:

 2010/9/16 Peter Albrechtalbre...@opensourceservices.de:

 Hi Sergio,

 I use squid squid-2.6.STABLE21-3.el5 on CentOS 5.4. The problem is
 that squid can't be restarted and rotate isnt working, I mean log
 rotating is done but I have to start  the service by hand.


 I think that I found the cause of problem. Since I was rotating on a
 different times each log, only executed squid -k rotate when it
 rotated the store.log, but it didn't when it made the access.log and
 cache log. So I've append
 postrotate
       /usr/sbin/squid -k rotate
     endscript

 at the end of both access.log and cache.log sections.


 Careful that this does not make squid overwrite log data.


Why do you say that? Could that happen? Stupid question:  What does
'squid -k rotate' really do?

Thanks in advance
-- 
--
Sergio Belkin http://www.sergiobelkin.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


RE: Re: [squid-users] SSL between squid and client possible?

2010-09-22 Thread Bucci, David G
FYI, as a workaround until the browsers cleanly support SSL to a proxy, we
used stunnel to accomplish exactly this, securing the traffic between the
client and Squid.  In our case we have Squid running on a Windows server, and
the SSL support wasn't stable for us, so for that reason (and other reasons I
won't go into) we run stunnel on both ends -- but it would likely work just as
well to simply point the workstation's stunnel directly at a Squid SSL port.

Working like a charm.  Glad to provide more details if it's of interest.
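
For illustration, the client side of such a tunnel can be described in a few
lines of stunnel.conf (the host names and ports below are assumptions; the
server end needs a matching certificate):

; sketch of a client-side stunnel.conf -- names and ports are assumptions
client = yes

; the browser's proxy setting points at the local accept address;
; connect goes to the TLS port where Squid (or the server-side stunnel) listens
[squid-tls]
accept = 127.0.0.1:3128
connect = proxy.example.com:3129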

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, September 21, 2010 10:34 PM
To: squid-users@squid-cache.org
Subject: EXTERNAL: Re: [squid-users] SSL between squid and client possible?

On Tue, 21 Sep 2010 16:39:53 -0700, David Parks davidpark...@yahoo.com
wrote:
 Can SSL be enabled between client and squid?
 Example: An HTTP request to http://yahoo.com goes over SSL from client to
 squid proxy, then standard HTTP from squid to yahoo, and is again secured
 from squid to client on the way back?
 It seems like this is only possible with reverse proxy setups, not typical
 forward-proxy traffic.
 Just wanted to verify my understanding here.
 Thanks,
 David

Squid will do this happily. https_port is the same as http_port but
requires SSL/TLS on the link.
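
For illustration, the Squid side of that is a single directive along these
lines (certificate paths are placeholders):

# sketch: a forward-proxy port that requires TLS from the client
https_port 3129 cert=/etc/squid/proxy-cert.pem key=/etc/squid/proxy-key.pem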

The problem is that most web browsers won't do the SSL/TLS when talking to
an HTTP proxy. Please assist with bugging the browser devs about this.
https://bugzilla.mozilla.org/show_bug.cgi?id=378637.  There are
implications that they might do HTTP-over-SSL to SSL proxies, but certainly
will send non-HTTP there and break those protocols instead.

Amos


[squid-users] squid not storing objects to disk and getting RELEASED on the fly

2010-09-22 Thread Rajkumar Seenivasan
I have a strange issue with my Squid (v3.1.8):
two Squid servers in a sibling-sibling setup in accel mode.

After running Squid for 2 to 3 days, the HIT rate has gone down:
from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

store.log shows that even fresh requests are NOT getting stored to
disk and are getting RELEASED right away.
The issue occurs on both Squids...

store.log entry:
1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
200 1285158415-1 1285230415 application/x-download -1/279307
GET http://
requests have a max-age of 20Hrs.

squid.conf:
cache_dir aufs /squid/var/cache 20480 16 256
cache_mem 1536 MB
memory_pools off
cache_swap_low 50
cache_swap_high 55
refresh_pattern . 0 20% 1440


The filesystem is reiserfs on RAID-0; only 11GB is used for the cache.

$cat /proc/sys/fs/file-nr
640 0   1525202

$ cat /proc/sys/fs/file-max
1525202

Any help is highly appreciated.

thanks.


Re: [squid-users] squid not storing objects to disk and getting RELEASED on the fly

2010-09-22 Thread Chad Naugle
What is your cache_replacement_policy directive set to?

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
I have a strange issue happening with my squid (v 3.1.8)
2 squid servers with sibling - sibling setup in accel mode.

after running the squid for 2 to 3 days, the HIT rate has gone down.
from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

store.log shows that even fresh requests are NOT getting stored onto
disk and getting RELEASED rightaway.
This issue is with both squids...

store.log entry:
1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
200 1285158415-1 1285230415 application/x-download -1/279307
GET http://
requests have a max-age of 20Hrs.

squid.conf:
cache_dir aufs /squid/var/cache 20480 16 256
cache_mem 1536 MB
memory_pools off
cache_swap_low 50
cache_swap_high 55
refresh_pattern . 0 20% 1440


filesystem is resizerfs with RAID-0. only 11GB used for the cache.

$cat /proc/sys/fs/file-nr
640 0   1525202

$ cat /proc/sys/fs/file-max
1525202

Any help is highly appreciated.

thanks.




Re: [squid-users] squid not storing objects to disk and getting RELEASED on the fly

2010-09-22 Thread Rajkumar Seenivasan
I have the following for replacement policy...

cache_replacement_policy heap LFUDA
memory_replacement_policy lru

thanks.

On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle chad.nau...@travimp.com wrote:
 What is your cache_replacement_policy directive set to?

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
 I have a strange issue happening with my squid (v 3.1.8)
 2 squid servers with sibling - sibling setup in accel mode.

 after running the squid for 2 to 3 days, the HIT rate has gone down.
 from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

 store.log shows that even fresh requests are NOT getting stored onto
 disk and getting RELEASED rightaway.
 This issue is with both squids...

 store.log entry:
 1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
 200 1285158415        -1 1285230415 application/x-download -1/279307
 GET http://
 requests have a max-age of 20Hrs.

 squid.conf:
 cache_dir aufs /squid/var/cache 20480 16 256
 cache_mem 1536 MB
 memory_pools off
 cache_swap_low 50
 cache_swap_high 55
 refresh_pattern . 0 20% 1440


 filesystem is resizerfs with RAID-0. only 11GB used for the cache.

 $cat /proc/sys/fs/file-nr
 640     0       1525202

 $ cat /proc/sys/fs/file-max
 1525202

 Any help is highly appreciated.

 thanks.





Re: [squid-users] squid not storing objects to disk and gettingRELEASED on the fly

2010-09-22 Thread Chad Naugle
Perhaps you can try switching to heap GDSF instead of heap LFUDA.  Also, what
are your minimum_object_size and maximum_object_size set to?

Perhaps you can also try setting cache_swap_low / cache_swap_high back to the
defaults (90 / 95) to see if that makes a difference.
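
In squid.conf terms the suggestion amounts to something like this (values
taken from the lines above, so treat it as a sketch):

# sketch of the suggested changes -- values are illustrative
cache_replacement_policy heap GDSF
cache_swap_low 90
cache_swap_high 95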

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 2:05 PM 
I have the following for replacement policy...

cache_replacement_policy heap LFUDA
memory_replacement_policy lru

thanks.

On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle chad.nau...@travimp.com wrote:
 What is your cache_replacement_policy directive set to?

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
 I have a strange issue happening with my squid (v 3.1.8)
 2 squid servers with sibling - sibling setup in accel mode.

 after running the squid for 2 to 3 days, the HIT rate has gone down.
 from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

 store.log shows that even fresh requests are NOT getting stored onto
 disk and getting RELEASED rightaway.
 This issue is with both squids...

 store.log entry:
 1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
 200 1285158415-1 1285230415 application/x-download -1/279307
 GET http://
 requests have a max-age of 20Hrs.

 squid.conf:
 cache_dir aufs /squid/var/cache 20480 16 256
 cache_mem 1536 MB
 memory_pools off
 cache_swap_low 50
 cache_swap_high 55
 refresh_pattern . 0 20% 1440


 filesystem is resizerfs with RAID-0. only 11GB used for the cache.

 $cat /proc/sys/fs/file-nr
 640 0   1525202

 $ cat /proc/sys/fs/file-max
 1525202

 Any help is highly appreciated.

 thanks.







[squid-users] Cached vs Fetched stats?

2010-09-22 Thread Andrei
Is there a quick command or utility that would show me how much of the
content is fetched from origin servers and how much is served from the
proxy's cache? I have Cache Manager and squidview installed.
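
One quick way, assuming the cache manager is reachable on localhost and the
usual port, is to pull the hit-ratio counters with squidclient (host and port
below are assumptions):

# sketch: ask the cache manager for its hit-ratio counters
squidclient -h 127.0.0.1 -p 3128 mgr:info | grep -i 'hit ratio'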


[squid-users] Re: Re: Re: Squid 3.1.6, Kerberos and strange browser auth behavior

2010-09-22 Thread Markus Moeller


Aleksandar Ciric aciri...@yahoo.com wrote in message 
news:375975.43025...@web114214.mail.gq1.yahoo.com...

Gentoo Squid, IE browser

1. GET google
2. 407, Proxy-Authenticate: Negotiate\r\n
3. GET google, Proxy-Authorization: Negotiate token, NTLMSSP
4. 407, Proxy-Authenticate: Negotiate\r\n


Interesting. I thought Negotiate would use Kerberos first and then NTLM.


5. Pass Prompt (stays on after ack)
6. KRB5 AS-REQ/AS-REP, TGS-REQ/TGS-REP (with AD server)
7. GET google, Proxy-Authorization: Negotiate token, GSS-API (SPNEGO)


What does Squid say here in the logfile? If the token is complete it should
already return 200 OK.


If not, step 8 should also return a token after Negotiate.  Can you confirm
that step 8 does not contain a GSSAPI token?



8. 407, Proxy-Authenticate: Negotiate\r\n
pause (here I waited about a minute to type all this)
9. Ack the pass prompt again (same user/pass, it stays filled in)
10. KRB5 AS-REQ/AS-REP, TGS-REQ/TGS-REP (with AD server)
11. GET google, Proxy-Authorization: Negotiate token, GSS-API (SPNEGO)
12. 200 OK, Proxy-Authentication-Info: Negotiate

token in 7 & 11 is exactly the same, same pvno, as are the Kerberos ticket 
version numbers in 6 and 10.


There is no difference in 2, 4, 8 headerwise.

Apparently that pause removed the need for a third attempt; however, you can 
blitz through the entire process by acknowledging the password prompt 3x in a 
row, which would only add steps 6, 7 & 8 once more.


What is interesting is that a rather long pause (I tried 30 secs; it needs 
about a minute) made all the difference.




Regards
Markus 





Re: [squid-users] squid not storing objects to disk and gettingRELEASED on the fly

2010-09-22 Thread Rajkumar Seenivasan
Thanks for the tip. I will try heap GDSF to see if it makes a
difference.
Any idea why the object is not considered a hot object and stored in memory?

I have...
minimum_object_size 0 bytes
maximum_object_size 5120 KB

maximum_object_size_in_memory 1024 KB

Earlier we had cache_swap_low and cache_swap_high at 80 and 85%, and physical
memory usage went so high that only 50MB was left free out of 15GB.
To fix that, the low and high were set to 50 and 55%.

Do these changes to cache_replacement_policy and cache_swap_low /
cache_swap_high require a restart, or will a -k reconfigure do it?

Current usage: Top
top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03, 0.00
Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
Swap: 25703960k total,   92k used, 25703868k free, 10692796k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
17442 squid 15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid


#free
 total   used   free sharedbuffers cached
Mem:  15736360   141751641561196  0 283160   10692864
-/+ buffers/cache:3199140   12537220
Swap: 25703960 92   25703868


Thanks.


On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle chad.nau...@travimp.com wrote:
 Perhaps you can try switching to heap GSDF, instead of heap LFUDA.  What are 
 also your minimum_object_size versus your _maximum_object_size?

 Perhaps you can also try setting the cache_swap_low / high back to default 
 (90 - 95) to see if that will make a difference.

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 2:05 PM 
 I have the following for replacement policy...

 cache_replacement_policy heap LFUDA
 memory_replacement_policy lru

 thanks.

 On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle chad.nau...@travimp.com wrote:
 What is your cache_replacement_policy directive set to?

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
 I have a strange issue happening with my squid (v 3.1.8)
 2 squid servers with sibling - sibling setup in accel mode.

 after running the squid for 2 to 3 days, the HIT rate has gone down.
 from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

 store.log shows that even fresh requests are NOT getting stored onto
 disk and getting RELEASED rightaway.
 This issue is with both squids...

 store.log entry:
 1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
 200 1285158415        -1 1285230415 application/x-download -1/279307
 GET http://
 requests have a max-age of 20Hrs.

 squid.conf:
 cache_dir aufs /squid/var/cache 20480 16 256
 cache_mem 1536 MB
 memory_pools off
 cache_swap_low 50
 cache_swap_high 55
 refresh_pattern . 0 20% 1440


 filesystem is resizerfs with RAID-0. only 11GB used for the cache.

 $cat /proc/sys/fs/file-nr
 640     0       1525202

 $ cat /proc/sys/fs/file-max
 1525202

 Any help is highly appreciated.

 thanks.








Re: [squid-users] squid not storing objects to disk andgettingRELEASED on the fly

2010-09-22 Thread Chad Naugle
With that large amount of RAM I would increase those maximums to, let's say,
8 MB, 16 MB or 32 MB, especially if you plan on sticking with heap LFUDA,
which is optimized for keeping larger objects and trashes smaller objects
faster, whereas heap GDSF is the opposite; using LRU for the memory policy
with the larger objects offsets the difference.
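
Expressed as squid.conf directives, that suggestion would look roughly like
this (pick one of the suggested sizes; the numbers are illustrative):

# sketch of the suggested size limits -- illustrative values only
maximum_object_size 32 MB
maximum_object_size_in_memory 8 MB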

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 3:01 PM 
Thanks for the tip. I will try with heap GSDF to see if it makes a
difference.
Any idea why the object is not considered as a hot-object and stored in memory?

I have...
minimum_object_size 0 bytes
maximum_object_size 5120 KB

maximum_object_size_in_memory 1024 KB

Earlier we had cache_swap_low and high at 80 and 85% and the physical
memory usage went high leaving only 50MB free out of 15GB.
To fix this issue, the high and low were set to 50 and 55%.

Does this change in cache_replacement_policy and the cache_swap_low
/ high require a restart or just a -k reconfigure will do it?

Current usage: Top
top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03, 0.00
Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
Swap: 25703960k total,   92k used, 25703868k free, 10692796k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
17442 squid 15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid


#free
 total   used   free sharedbuffers cached
Mem:  15736360   141751641561196  0 283160   10692864
-/+ buffers/cache:3199140   12537220
Swap: 25703960 92   25703868


Thanks.


On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle chad.nau...@travimp.com wrote:
 Perhaps you can try switching to heap GSDF, instead of heap LFUDA.  What are 
 also your minimum_object_size versus your _maximum_object_size?

 Perhaps you can also try setting the cache_swap_low / high back to default 
 (90 - 95) to see if that will make a difference.

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 2:05 PM 
 I have the following for replacement policy...

 cache_replacement_policy heap LFUDA
 memory_replacement_policy lru

 thanks.

 On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle chad.nau...@travimp.com wrote:
 What is your cache_replacement_policy directive set to?

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
 I have a strange issue happening with my squid (v 3.1.8)
 2 squid servers with sibling - sibling setup in accel mode.

 after running the squid for 2 to 3 days, the HIT rate has gone down.
 from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

 store.log shows that even fresh requests are NOT getting stored onto
 disk and getting RELEASED rightaway.
 This issue is with both squids...

 store.log entry:
 1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
 200 1285158415-1 1285230415 application/x-download -1/279307
 GET http://
 requests have a max-age of 20Hrs.

 squid.conf:
 cache_dir aufs /squid/var/cache 20480 16 256
 cache_mem 1536 MB
 memory_pools off
 cache_swap_low 50
 cache_swap_high 55
 refresh_pattern . 0 20% 1440


 filesystem is resizerfs with RAID-0. only 11GB used for the cache.

 $cat /proc/sys/fs/file-nr
 640 0   1525202

 $ cat /proc/sys/fs/file-max
 1525202

 Any help is highly appreciated.

 thanks.









[squid-users] One slow Website Through Proxy

2010-09-22 Thread Dean Weimer
I am running squid 3.1.8, and have one website that pauses for about 1 to 2 
minutes before loading.  The website is www.pb.com (PitneyBowes).  There are no 
errors logged in the cache.log file, and nothing unusual in the access.log 
file.  I have even done network packet captures and don't see anything unusual. 
 The website responds fine when bypassing the proxy and every other website 
appears to be fine through the proxy server.

I have tested with both IE and Firefox, using my default wpad.dat script with 
auto detect and manually specifying the proxy server with no change.  And even 
tried turning HTTP/1.1 through proxy servers on and off at the browser, nothing 
seems to affect its behavior.

Can any of you confirm whether or not this website is slow through your setups, 
or have any idea what could be causing this issue? 

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950



Re: [squid-users] One slow Website Through Proxy

2010-09-22 Thread Chad Naugle
I am not sure what is causing the issue, but in my own test IE8 performed
very SLOWLY (using the PROD proxy), whereas under Firefox 3.5.13 (using my
DEV proxy) the site was almost instantly available while IE8 was STILL
loading the same page.  After the first load, my PROD proxy under IE8 loaded
considerably faster, but nowhere near as fast as Firefox 3.5.13 was on its
first attempt.


-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


 Dean Weimer dwei...@orscheln.com 9/22/2010 3:13 PM 
I am running squid 3.1.8, and have one website that pauses for about 1 to 2 
minutes before loading.  The website is www.pb.com (PitneyBowes).  There are no 
errors logged in the cache.log file, and nothing unusual in the access.log 
file.  I have even done network packet captures and don't see anything unusual. 
 The website responds fine when bypassing the proxy and every other website 
appears to be fine through the proxy server.

I have tested with both IE and Firefox, using my default wpad.dat script with 
auto detect and manually specifying the proxy server with no change.  And even 
tried turning HTTP/1.1 through proxy servers on and off at the browser, nothing 
seems to affect its behavior.

Can any of you confirm whether or not this website is slow through your setups, 
or have any idea what could be causing this issue? 

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950





Re: [squid-users] Problem restarting/stopping squid

2010-09-22 Thread Amos Jeffries
On Wed, 22 Sep 2010 13:49:01 -0300, Sergio Belkin seb...@gmail.com
wrote:
 2010/9/22 Amos Jeffries squ...@treenet.co.nz:
 On 23/09/10 03:06, Sergio Belkin wrote:

 2010/9/16 Amos Jeffriessqu...@treenet.co.nz:

 On 17/09/10 01:46, Sergio Belkin wrote:

 2010/9/16 Peter Albrechtalbre...@opensourceservices.de:

 Hi Sergio,

 I use squid squid-2.6.STABLE21-3.el5 on CentOS 5.4. The problem is
 that squid can't be restarted and rotate isnt working, I mean
log
 rotating is done but I have to start  the service by hand.


 I think that I found the cause of problem. Since I was rotating on a
 different times each log, only executed squid -k rotate when it
 rotated the store.log, but it didn't when it made the access.log and
 cache log. So I've append
 postrotate
   /usr/sbin/squid -k rotate
 endscript

 at the end of both access.log and cache.log sections.


 Careful that this does not make squid overwrite log data.

 
 Why do you say that? Could that happen? Stupid question:  What does
 'squid -k rotate' really do?

It:
  schedules helpers to restart and release their cache.log connections
  renames all log files N to N+1 (for logfile_rotate number of files)
  releases the log file descriptors
  re-opens the un-numbered log files
  begins writing again from the start of the file

With logfile_rotate set to 0 and logrotate.d calling -k rotate from two
differently timed actions, you will likely end up with access.log being
rotated by logrotate.d and then *both* logs released and re-started by Squid.
This is not as bad as the opposite case, where access.log gets reset by the
cache.log rotation.
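
A sketch of the squid.conf side of that arrangement, where logrotate does the
renaming and Squid only reopens its files:

# sketch: let logrotate rename the files; -k rotate then only reopens them
logfile_rotate 0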

Amos


Re: [squid-users] One slow Website Through Proxy

2010-09-22 Thread Amos Jeffries
On Wed, 22 Sep 2010 16:00:32 -0400, Chad Naugle
chad.nau...@travimp.com
wrote:
 I am not sure what is causing the issue, but in my own test, IE8
performed
 SLOOO by far (Using the PROD Proxy), where under Firefox 3.5.13
(Using
 my DEV Proxy), the site was almost instantly available while the IE8 was
 STILL loading the same page.  After the first load, my PROD Proxy under
IE8
 loaded considerably faster, but not anywhere close to as fast as with
 Firefox 3.5.13, for the first attempt.
 
 
 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.
  
 
 
 Dean Weimer dwei...@orscheln.com 9/22/2010 3:13 PM 
 I am running squid 3.1.8, and have one website that pauses for about 1
to
 2 minutes before loading.  The website is www.pb.com (PitneyBowes). 
There
 are no errors logged in the cache.log file, and nothing unusual in the
 access.log file.  I have even done network packet captures and don't see
 anything unusual.  The website responds fine when bypassing the proxy
and
 every other website appears to be fine through the proxy server.
 
 I have tested with both IE and Firefox, using my default wpad.dat script
 with auto detect and manually specifying the proxy server with no
change. 
 And even tried turning HTTP/1.1 through proxy servers on and off at the
 browser, nothing seems to affect its behavior.
 
 Can any of you confirm whether or not this website is slow through your
 setups, or have any idea what could be causing this issue? 
 

The www.pb.com domain times out while resolving AAAA DNS records instead
of returning an NXDOMAIN or SERVFAIL response. The default DNS timeout is 2
minutes, after which Squid will use the A results to fetch the page.

Amos
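
If shortening that worst-case wait is acceptable, the relevant squid.conf knob
is dns_timeout (the value below is only an illustration):

# sketch: reduce the worst-case wait on unanswered DNS lookups
dns_timeout 30 seconds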


Re: [squid-users] squid not storing objects to disk andgettingRELEASED on the fly

2010-09-22 Thread Amos Jeffries
On Wed, 22 Sep 2010 15:09:31 -0400, Chad Naugle
chad.nau...@travimp.com
wrote:
 With that large array of RAM I would increase those maximum numbers, to
 let's say, 8 MB, 16 MB, 32 MB, especially if you plan on using heap
LFUDA,
 which is optimized for storing larger objects, and trashes smaller
objects
 faster, where heap GSDF is the opposite, using LRU for memory for the
large
 objects to offset the difference.
 
 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.
  
 
 
 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 3:01 PM 
 Thanks for the tip. I will try with heap GSDF to see if it makes a
 difference.
 Any idea why the object is not considered as a hot-object and stored in
 memory?

see below.

 
 I have...
 minimum_object_size 0 bytes
 maximum_object_size 5120 KB
 
 maximum_object_size_in_memory 1024 KB
 
 Earlier we had cache_swap_low and high at 80 and 85% and the physical
 memory usage went high leaving only 50MB free out of 15GB.
 To fix this issue, the high and low were set to 50 and 55%.

You need 50% of the cache kept empty so as not to fill RAM? Then the cache is
too big or the RAM is not enough.

 
 Does this change in cache_replacement_policy and the cache_swap_low
 / high require a restart or just a -k reconfigure will do it?
 
 Current usage: Top
 top - 14:33:39 up 12 days, 21:44,  3 users,  load average: 0.03, 0.03,
0.00
 Tasks:  83 total,   1 running,  81 sleeping,   1 stopped,   0 zombie
 Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si, 
 0.6%st
 Mem:  15736360k total, 14175056k used,  1561304k free,   283140k buffers
 Swap: 25703960k total,   92k used, 25703868k free, 10692796k cached
 
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 17442 squid 15   0 1821m 1.8g  14m S  0.3 11.7   4:03.23 squid
 
 
 #free
  total   used   free sharedbuffers
cached
 Mem:  15736360   141751641561196  0 283160  
10692864
 -/+ buffers/cache:3199140   12537220
 Swap: 25703960 92   25703868
 
 
 Thanks.
 
 
 On Wed, Sep 22, 2010 at 2:16 PM, Chad Naugle chad.nau...@travimp.com
 wrote:
 Perhaps you can try switching to heap GSDF, instead of heap LFUDA. 
What
 are also your minimum_object_size versus your _maximum_object_size?

 Perhaps you can also try setting the cache_swap_low / high back to
 default (90 - 95) to see if that will make a difference.

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 2:05 PM 
 I have the following for replacement policy...

 cache_replacement_policy heap LFUDA
 memory_replacement_policy lru

 thanks.

 On Wed, Sep 22, 2010 at 2:00 PM, Chad Naugle chad.nau...@travimp.com
 wrote:
 What is your cache_replacement_policy directive set to?

 -
 Chad E. Naugle
 Tech Support II, x. 7981
 Travel Impressions, Ltd.



 Rajkumar Seenivasan rkcp...@gmail.com 9/22/2010 1:55 PM 
 I have a strange issue happening with my squid (v 3.1.8)
 2 squid servers with sibling - sibling setup in accel mode.

What version was in use before this happened? Was 3.1.8 okay for a while, or
did it start discarding right at the point of upgrade from another version?


 after running the squid for 2 to 3 days, the HIT rate has gone down.
 from 50% HIT to 34% for TCP and from 34% HIT to 12% for UDP.

 store.log shows that even fresh requests are NOT getting stored onto
 disk and getting RELEASED rightaway.
 This issue is with both squids...

 store.log entry:
 1285176036.341 RELEASE -1  7801460962DF9DCA15DE95562D3997CB
 200 1285158415-1 1285230415 application/x-download -1/279307
 GET http://
 requests have a max-age of 20Hrs.

The server advertised the content length as unknown, then sent 279307 bytes
(the -1/279307 in the log). Squid is forced to store it to disk immediately
(it could be a TB about to arrive, for all Squid knows).


 squid.conf:
 cache_dir aufs /squid/var/cache 20480 16 256
 cache_mem 1536 MB
 memory_pools off
 cache_swap_low 50
 cache_swap_high 55

These tell Squid that 50% of the disk space allocated to the cache MUST be
empty at all times, and to erase content if more is used. The defaults for
these are less than 100% in order to leave a small buffer of space for
line-speed traffic still arriving while Squid purges old objects to make room.

The 90%/95% defaults were created back when large HDDs were measured in MB.

50%/55% with a 20GB cache only makes sense if you have something greater
than 250Mbps of new cachable HTTP data flowing through this one Squid
instance. In which case I'd suggest a bigger cache.

(My estimate of the bandwidth is calculated from the percentage of cache
required to be free divided by the roughly 5 minute lag between purge passes:
50% of a 20GB cache is 10GB, and 10GB over ~300 seconds is about 34MB/s,
i.e. roughly 270Mbps.)


 refresh_pattern . 0 20% 1440


 filesystem is resizerfs with RAID-0. only 11GB used for the cache.

Used or available?

cache_dir...20480 = 20GB allocated for the cache.

With 11GB is