[squid-users] Strange Problem

2006-07-17 Thread Raj

Hi,

I am running Version 2.5.STABLE10. I have a strange problem with one
of the web sites. If I access the web site https://66.227.81.53/, it
doesn't work. But if I access the same web site with http instead of
https, http://66.227.81.53:443/ it works fine.

Below are the access logs:

1153199709.750  2 172.26.101.76 TCP_DENIED/407 1683 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199709.756  1 172.26.101.76 TCP_DENIED/407 1753 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.230   6473 172.26.101.76 TCP_MISS/000 420 CONNECT
66.227.81.53:443 auchoa FIRST_UP_PARENT/172.26.1.67 -
1153199716.249  0 172.26.101.76 TCP_DENIED/407 1683 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.252  1 172.26.101.76 TCP_DENIED/407 1753 CONNECT
66.227.81.53:443 - NONE/- text/html
1153199716.714    461 172.26.101.76 TCP_MISS/000 387 CONNECT
66.227.81.53:443 auchoa FIRST_UP_PARENT/172.26.1.67 -
1153199741.052  0 172.26.101.76 TCP_DENIED/407 1707 GET
http://66.227.81.53:443/ - NONE/- text/html
1153199741.056  1 172.26.101.76 TCP_DENIED/407 1777 GET
http://66.227.81.53:443/ - NONE/- text/html
1153199741.865    808 172.26.101.76 TCP_MISS/600 9 GET
http://66.227.81.53:443/ auchoa FIRST_UP_PARENT/172.26.1.67 -
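For readers sifting through entries like these, the native access.log layout (timestamp, duration in ms, client, result/status, bytes, method, URL, user, hierarchy/peer, content type) can be split mechanically. A minimal sketch in Python, applied to one of the lines above:

```python
# Parse Squid's native access.log format:
# time.ms duration client result/status bytes method URL user hierarchy/peer type
def parse_access_line(line):
    f = line.split()
    result, status = f[3].split("/")
    hier, peer = f[8].split("/")
    return {
        "timestamp": float(f[0]),
        "duration_ms": int(f[1]),
        "client": f[2],
        "result": result,
        "status": int(status),
        "bytes": int(f[4]),
        "method": f[5],
        "url": f[6],
        "user": f[7],
        "hierarchy": hier,
        "peer": peer,
        "type": f[9] if len(f) > 9 else None,
    }

line = ("1153199709.750 2 172.26.101.76 TCP_DENIED/407 1683 "
        "CONNECT 66.227.81.53:443 - NONE/- text/html")
rec = parse_access_line(line)
print(rec["result"], rec["status"], rec["method"])
```

The 407s here are the normal proxy-authentication challenges; the subsequent TCP_MISS/000 with 0-byte-ish replies is where the actual failure shows up.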


I have the following ACL for port 443

acl SSL_ports port 443
http_access deny CONNECT !SSL_ports

I am not sure why it works if I use http but not https.

Thanks


Re: [squid-users] lots of TIMEOUT_DIRECT

2006-07-17 Thread Mathew Thomas
Hi,

We are running Squid 2.5.STABLE5 on Red Hat.  Our setup consists
of all the departmental proxies (Novell & Squid) configured to use our
proxy (the "super proxy") as parent; the departmental proxies cannot go
out to the internet directly. There are three super proxies, which are
siblings of each other. Since we enabled the sibling relationship, we have
been getting a lot of the following messages in access.log, and performance
also does not look very good.

1) What is the minimum configuration needed on the super proxies to
allow them to be siblings of each other and to act as parents for the
departmental proxies?
2) What is the cause of the following error message?


1153198439.920    322 192.168.6.143 TCP_MISS/200 13351 GET
http://www.innovatoys.com/images/kablaster.jpg -
TIMEOUT_DIRECT/204.10.70.167 image/jpeg
1153198440.198    501 192.168.6.143 TCP_MISS/404 688 GET
http://www.innovatoys.com/images/fire.jpg -
TIMEOUT_DIRECT/204.10.70.167 text/html
1153198440.409     95 192.168.6.143 TCP_MISS/200 4960 GET
http://www.theage.com.au/entertainment/planner/planner.html -
TIMEOUT_DIRECT/203.26.51.42 text/html
1153198440.499   3263 192.168.6.143 TCP_MISS/200 127451 GET
http://www.myspace.com/undefined - TIMEOUT_DIRECT/63.208.226.42
text/html
1153198440.587    976 192.168.6.143 TCP_MISS/200 22807 GET
http://www.innovatoys.com/images/zero-G.jpg -
TIMEOUT_DIRECT/204.10.70.167 image/jpeg
1153198441.184    980 192.168.6.143 TCP_MISS/200 10501 GET
http://www.innovatoys.com/images/micro.jpg -
TIMEOUT_DIRECT/204.10.70.167 image/jpeg
1153198441.408    501 192.168.6.143 TCP_MISS/200 13386 GET
http://www.innovatoys.com/images/skyliner.jpg -
TIMEOUT_DIRECT/204.10.70.167 image/jpeg
1153198441.801   3586 192.168.6.143 TCP_MISS/200 39428 GET
http://netmode.vietnamnet.vn/dataimages/200607/original/images1037973_3.jpg
-

Thanks
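For question 1, a minimal sibling arrangement is usually just one cache_peer line per peer (a sketch with placeholder hostnames, not taken from this setup). The TIMEOUT_ prefix in the hierarchy code generally indicates that the ICP peer-selection query timed out before Squid gave up and fetched the object itself, so checking that the ICP port (3130 by default) is open between the peers is a good first step:

```
# On each super proxy (hostnames are placeholders):
cache_peer super2.example.com sibling 3128 3130 proxy-only
cache_peer super3.example.com sibling 3128 3130 proxy-only

# Departmental proxies point at a super proxy as parent:
cache_peer super1.example.com parent 3128 3130 default
```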



[squid-users] SQUID3: enable ssl requests between SQUID and backend servers in accelerator mode (reverse proxy)

2006-07-17 Thread gwaa
I am trying to set up the following:
[HTTPS client: internet:443]
|
|
[NATfirewall]
|
[HTTPS:10443]
|
[SQUID3]
|
[HTTPS:443]
|
|
[SERVERS]

I have the following lines in squid.conf:

http_access allow our_networks
http_access allow all
http_port 3128 vhost vport=80 protocol=http defaultsite=www.domain1.com
acl http proto http 
acl https proto https
acl port80 port 80
acl port443 port 443  
acl domain1_com dstdomain .domain1.com

https_port 192.168.2.2:10443 cert=file.crt key=file.key
defaultsite=www.domain1.com

cache_peer 192.168.2.2 parent 443 0 no-query name=domain1-ssl ssl 
sslcert=file.crt sslkey=file.key
cache_peer_access domain1-ssl allow domain1_com

http_access allow http port80 domain1_com  port443 https
url_rewrite_host_header off

but I don't get https pages in my browser (I get http pages).
Thanks for any help.
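For comparison, one common shape for an HTTPS-in/HTTPS-out accelerator is sketched below. The backend address 10.0.0.10 and the originserver flag are illustrative assumptions, not taken from the post; note that the cache_peer should point at the origin server rather than at Squid's own listening address:

```
https_port 192.168.2.2:10443 cert=file.crt key=file.key defaultsite=www.domain1.com

cache_peer 10.0.0.10 parent 443 0 no-query originserver ssl name=domain1-ssl
cache_peer_access domain1-ssl allow domain1_com
```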


[squid-users] Squid Slow Downloads problem--large files

2006-07-17 Thread adam.cheng
Hi, squid-users,

In my test, Apache is much faster than Squid at a load of about
30 Mbps.

brief result:

-->large file download testing (10M and 40M)
-->Current load: 15Mbps (squid service)
-->IOwait: 6%
-->test result:  (same box, same environment)
Apache: 60~70Mbytes/s
Squid:  700~900Kbytes/s  (HIT from squid cache)

Detailed information is listed below.



->Hi, squid-users:
->
->I have run into a slow download problem with Squid. Could anybody tell me
->what's the matter with my Squid, or is there any way to resolve this
problem?
->
->Squid info:
->
->[EMAIL PROTECTED] ~]# squid -v
->Squid Cache: Version 2.5.STABLE12
->configure options:  --prefix=/usr/local/squid --enable-epoll
--disable-ident-lookups
->--enable-async-io=160 --enable-storeio=ufs,aufs,diskd --enable-snmp
->--enable-cache-digests --enable-useragent-log --enable-referer-log
->--enable-kill-parent-hack --enable--internal-dns
->--
->
->
->squid.conf:
->
->http_port 80
->icp_port 0
->acl httpmp3 url_regex -i ^http://.*\.mp3$
->no_cache deny httpmp3
->acl httpwmv url_regex -i ^http://.*\.wmv$
->no_cache deny httpwmv
->acl httprm url_regex -i ^http://.*\.rm$
->no_cache deny httprm
-> cache_mem 1768 MB
-> cache_swap_low 70
-> cache_swap_high 80
->maximum_object_size 204800 KB
->minimum_object_size 0 KB
->maximum_object_size_in_memory 102400 KB
-> cache_replacement_policy lru
-> memory_replacement_policy lru
->cache_dir diskd /data/cache1 28000 16 256
->cache_dir diskd /data/cache2 28000 16 256
->logformat squid_custom_log %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt "%{Referer}>h" "%{User-Agent}>h" "%{Cookie}>h"
->cache_access_log /data/proclog/log/squid/access.log squid_custom_log
->cache_log /data/proclog/log/squid/cache.log
->cache_store_log none
->pid_filename /var/run/squid.pid
-> hosts_file /etc/hosts
-> diskd_program /usr/local/squid/libexec/diskd
-> unlinkd_program /usr/local/squid/libexec/unlinkd
->
->
->refresh_pattern -i ^http://player.toodou.com.* 2073600 100% 2073600
->ignore-reload
->refresh_pattern -i ^http://www.blogcn.com.* 1440 50% 1440
->refresh_pattern -i ^http://images.blogcn.com.* 1440 50% 1440
->refresh_pattern -i ^http://female.blogcn.com.* 1440 50% 1440
->refresh_pattern -i ^http://img.365ren.com.* 720 100% 720
->refresh_pattern -i ^http://cfs1.365ren.com.* 720 100% 720
->refresh_pattern -i ^http://cafe-img.365ren.com.* 720 100% 720
->refresh_pattern -i ^http://cafe-cfs1.365ren.com.* 720 100% 720
->refresh_pattern -i ^http 60 0% 60 ignore-reload
->collapsed_forwarding on
->refresh_stale_hit 0 minute
->request_timeout 30 seconds
-> persistent_request_timeout 3 seconds
-> pconn_timeout 60 seconds
->acl all src 0.0.0.0/0.0.0.0
->acl manager proto cache_object
->acl localhost src 127.0.0.1/255.255.255.255
->acl to_localhost dst 127.0.0.0/8
->acl SSL_ports port 443 563
->acl Safe_ports port 80  # http
->acl Safe_ports port 21  # ftp
->acl Safe_ports port 443 563 # https, snews
->acl Safe_ports port 70  # gopher
->acl Safe_ports port 210 # wais
->acl Safe_ports port 1025-65535  # unregistered ports
->acl Safe_ports port 280 # http-mgmt
->acl Safe_ports port 488 # gss-http
->acl Safe_ports port 591 # filemaker
->acl Safe_ports port 777 # multiling http
->acl CONNECT method CONNECT
->acl monitor src 192.168.1.0/255.255.255.0
->http_access allow manager
->http_access allow manager monitor
->http_access deny manager
->acl PURGE method PURGE
->http_access allow PURGE localhost
->http_access deny purge
->acl snmppublic snmp_community public
->snmp_access allow snmppublic localhost
->http_access deny !Safe_ports
->http_access deny CONNECT !SSL_ports
->http_access allow all
->http_reply_access allow all
-> cache_mgr [EMAIL PROTECTED]
-> cache_effective_user squid
-> cache_effective_group squid
->visible_hostname CHN-SH-3-341
->httpd_accel_host virtual
->httpd_accel_port 80
->httpd_accel_single_host off
->httpd_accel_with_proxy off
->httpd_accel_uses_host_header on
->dns_testnames original1.chinacache.com original2.chinacache.com
->  logfile_rotate 0
->
->cachemgr_passwd test4squid config
-> store_avg_object_size 20 KB
->client_db off
->header_access X-Cache-Lookup deny all
->snmp_port 3401
->acl snmppublic snmp_community public
-> client_persistent_connections off
-> server_persistent_connections off
->vary_ignore_expire on
->strip_query_terms off
->negative_ttl 0 minute
->dns_retransmit_interval 10 seconds
->store_dir_select_algorithm round-robin
->dns_timeout 2 minute
->negative_dns_ttl 1 minute
->connect_timeout 30 seconds
->read_timeout 15 minutes
->
->--
-
->
->Test information:  (all test was done to the same box, )
->
->Squid with 80 port:
->
->[EMAIL PROTECTED] ~]# wget 

Re: [squid-users] Reverse Proxy SSL 3.0

2006-07-17 Thread Henrik Nordstrom
On Mon, 2006-07-17 at 16:06 -0400, Brad Taylor wrote:

> https_port 443 cert=/etc/squid/sbcert.pem
> key=/etc/squid/sbprivatekey.pem version=2
> 
> When I change to 3:
> https_port 443 cert=/etc/squid/sbcert.pem
> key=/etc/squid/sbprivatekey.pem version=3
> 
> I get a "page can not be displayed" in IE.

Probably IE sends an SSLv2 session setup with an upgrade to SSLv3/TLS. If
you set the protocol to 3, then only SSLv3 session setups are accepted, and
clients sending SSLv2 session setups will get rejected even if they
indicate they accept an upgrade to SSLv3 or TLS.

Try disabling SSLv2 via options= instead. This keeps the
"automatic" session setup mode and only restricts the end result.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Howto NOT log URLs in access.log

2006-07-17 Thread Henrik Nordstrom
On Mon, 2006-07-17 at 15:11 -0500, Michael Ellis wrote:
> Hi,
> 
> I was wondering if anyone knows of a way to configure squid so that it does
> not write the URL to access.log. All I want to know is who was browsing the
> web from which computer and when (date, client ip, and authname). This is to
> comply with personal privacy and information policies and laws.

See the custom_log_format directive and the access_log directive.  (2.6)
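In 2.6 syntax, a URL-free log could be sketched as follows (the format name and log path are illustrative, not prescribed):

```
# Log only timestamp, client IP and authenticated user name - no URL
logformat privacy %ts.%03tu %>a %un
access_log /var/log/squid/access.log privacy
```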

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid/SquidGuard: info of user and category

2006-07-17 Thread Chris Robertson

Karsten Rothemund wrote:

On Mon, Jul 10, 2006 at 10:29:42AM +0200, Peter Albrecht wrote:

Hi Karsten,

I still do not get any info about the requesting user. The field is
Interesting question. I was about to say no. But then a last test
showed info about the user "photor" (it's my login on the local
machine here). But when I reloaded the site (google.de, classified by
squidGuard as porn ;-) ), the user info disappeared (from Squid's
access.log):

1152732284.356    535 172.16.0.2 TCP_MISS/403 2379 GET http://google.de/ photor DIRECT/127.0.0.1 text/html
1152732374.307    376 172.16.0.2 TCP_MISS/403 2373 GET http://google.de/ photor DIRECT/127.0.0.1 text/html
1152732393.102    342 172.16.0.2 TCP_MISS/403 2395 GET http://google.de/ photor DIRECT/127.0.0.1 text/html
1152732461.940    338 172.16.0.2 TCP_MISS/403 2373 GET http://google.de/ - DIRECT/127.0.0.1 text/html
1152732471.052    337 172.16.0.2 TCP_MISS/403 2377 GET http://www.google.de/ - DIRECT/127.0.0.1 text/html

I don't see any logic or pattern behind this (probably because of
my limited knowledge). In the last line I retried with a slightly
different URL to see if this had to do with reloading the site - but
no. And I doubt this is a squid problem.

Still with problems

Karsten (aka Photor)

  

As per http://wiki.squid-cache.org/SquidFaq/SquidAcl...

...Squid does not wait for the lookup to complete unless the ACL rules 
require it.
So unless you have a rule requiring the ident information, it may or may 
not be provided.  See that section of the Wiki, and look for the bit 
about "How do I block specific users or groups from accessing my 
cache?".  That should help with reliably getting the ident information.


Chris



[squid-users] Howto NOT log URLs in access.log

2006-07-17 Thread Michael Ellis
Hi,

I was wondering if anyone knows of a way to configure squid so that it does
not write the URL to access.log. All I want to know is who was browsing the
web from which computer and when (date, client ip, and authname). This is to
comply with personal privacy and information policies and laws.

I suspect that one could edit the source and recompile, but this is a little
outside my comfort zone. If this is what is required, could someone provide
me with some direction for where to start?

Thanks,

Mike Ellis




[squid-users] Reverse Proxy SSL 3.0

2006-07-17 Thread Brad Taylor
I have squid (2.5.STABLE6) set up as a reverse proxy, working fine with an
SSL certificate. I've been told we need to disable SSL 2.0 and only
allow 3.0. In squid.conf I have the line:

https_port 443 cert=/etc/squid/sbcert.pem
key=/etc/squid/sbprivatekey.pem

and that line works and so does:

https_port 443 cert=/etc/squid/sbcert.pem
key=/etc/squid/sbprivatekey.pem version=2

When I change to 3:
https_port 443 cert=/etc/squid/sbcert.pem
key=/etc/squid/sbprivatekey.pem version=3

I get a "page can not be displayed" in IE.

I've not been able to find any help in the logs.

Anyone have any ideas or a fix? Thanks



Re: [squid-users] Excluding some clients from authentication REQUIRED acl

2006-07-17 Thread Chris Robertson

Geoff Varney wrote:

Hi,
I am trying to make Squid 2.6 work in the following setup:
  
I haven't had the time yet to upgrade to 2.6, so my advice may be...
unreliable.  You have been warned.

Main Site:
I have one master caching/authentication Squid 2.6 server

I have one DansGuardian (2.9.7.1) server with the above master Squid as its
parent

Remote Sites:
I have 3 remote Squid servers that each authenticate their local clients and
point to the above DG server as parent


I am passing on user and password from the remote Squids (no-query
login=*:password default).  This worked great when the main site had an
authentication Squid in front of DG (2.8) and the remote Squids used DG as
the parent, and the main site authentication Squid did the same.  In this
setup all sites were really the same.

Now with DG 2.9.7.1 I have tried to eliminate the main site authentication
Squid as DG will now pass through to Squid to authenticate.  This works
great at the main site.  However, when I set a remote Squid to use DG as its
parent there is now an attempt to authenticate AGAIN to the main site Squid
which is the parent to DG.

Philip Allison (DG developer) suggested using ACLs to exclude these remote
requests from being authenticated by the main Squid.  
Hmmm... By the time the requests reach the "main" Squid, they have all
passed through DG, and all appear to be from the same IP.  Unless, of
course, the follow-XFF patch was integrated into Squid 2.6...  If that
is the case (and you compiled with it enabled), you should be able to insert
an http_access rule allowing the subnet(s) access before denying access
to non-authenticated hosts.  Something like...


# The following lines require XFF
acl DansGuardian src 
follow_x_forwarded_for allow DansGuardian
acl_uses_indirect_client on
# End XFF requirement
acl no_auth src 
acl passwords_required proxy_auth REQUIRED
http_access allow no_auth
http_access allow passwords_required
http_access deny all

I have been working on
this but can't seem to get it to work.  I can get things to work if I allow
the remote subnet's IPs to have http_access, but that effectively skips DG
filtering.  I had hoped that something like:

acl no_auth src 
proxy_auth REQUIRED !no_auth
  
This would be trying to use an ACL within another ACL.  Perhaps that's 
possible in 2.6...

or something like that would skip auth on the main Squid.  But that doesn't
work, maybe the syntax is invalid for proxy_auth REQUIRED.

I know I don't have a complete understanding of acls (and much more!) and
know they are very powerful if you get them right and put them in the right
order, etc.
  
Check out the Wiki section on ACLs 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl).  There's a lot of good 
information there.

I'm stuck in getting the remote Squid requests to go to the main Squid,
then back through DG to filter, then out through Squid without trying to
authenticate again.  How do I make Squid skip authenticating some requests
(by IP acl or something?) but still filter with DG?  Can it be done?  If
not, I'll just go back to Squid Auth->DG->Squid Cache like before.

Thanks,
Geoff
  
Another option would be to create a login/password combination on the 
"main" Squid server, and have the "remote" Squid servers use that (e.g. 
the remote Squid servers would define their parent cache using 
"login=user:password").  *shrug*


Chris


RE: [squid-users] Odd caching problem

2006-07-17 Thread Gary W. Smith
Any ideas on this?  I have looked through some of the FAQs and haven't
found what I am looking for.  If it's covered somewhere in the docs/FAQ,
can someone point me to it?

Thanks,

Gary Wayne Smith

> -Original Message-
> From: Gary W. Smith [mailto:[EMAIL PROTECTED]
> Sent: Thursday, July 13, 2006 2:32 PM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Odd caching problem
> 
> Hello,
> 
> I am using squid 2.0 that comes with RedHat EL 4.  We have set it up to
> become a transparent proxy for our network (via iptables).  It seems to
> work great for most sites, but recently I have found a couple of sites that
> I cannot access through the proxy server.  One of those sites is
> www.newegg.com.  We use the stock configuration file with the following
> changes:
> 
> httpd_accel_host virtual
> httpd_accel_port 80
> httpd_accel_with_proxy on
> httpd_accel_uses_host_header on
> 
> acl PURGE method PURGE
> acl localhost src 127.0.0.1
> http_access allow PURGE localhost
> http_access deny PURGE
> 
> acl CGI url_regex .cgi$
> acl PHP url_regex .php$
> acl ASP url_regex .asp$
> acl ASPNET url_regex .aspx$
> no_cache deny CGI
> no_cache deny PHP
> no_cache deny ASP
> no_cache deny ASPNET
> 
> We assumed that it had something to do with the dynamic ASP pages being
> cached, so we added them to the no_cache list.  But it doesn't seem to make a
> difference.  When the users go to the page we see the entry in the log
> file for squid, but in the browser it just sits there.
> 
> Here is a log example:
> 
> 1152760837.202  37190 10.0.16.85 TCP_MISS/000 0 GET
> http://www.newegg.com/ - DIRECT/204.14.213.185 -
> 
> But if I remove the transparent proxy setting from iptables and go
> direct, it works.  If I later re-enable the setting it will continue to
> work for a little while (not sure how long, haven't timed it) but then
> it will eventually fail with the same TCP_MISS entries in the log
> file.
> 
> Any ideas?
> 
> Gary Smith


[squid-users] over time squid slows down

2006-07-17 Thread Ben Collver
Good day,

I am running a no-cache transparent proxy using squid 2.5.13 and
ipfilter on NetBSD/i386 3.0.

About every 2 or 3 weeks, it slows down and web browsing grinds to a
halt.  Restarting the squid daemon fixes the issue for another 2 weeks.

While this is happening, top reports that the load is low, and squid is
not using much memory.

My squid configuration is at the end of this message.  Can anyone give
advice on how to troubleshoot this, or ideas on what I may be doing
wrong?

Thank you,

Ben

http_port 127.0.0.1:3128
icp_port 0
udp_incoming_address XXX.XXX.XXX.XXX
udp_outgoing_address 255.255.255.255
hierarchy_stoplist cgi-bin ?
acl never-cache src 0.0.0.0/0.0.0.0
no_cache deny never-cache
cache_dir null /tmp
cache_access_log /LOGDIR/squid/access_log
cache_log /LOGDIR/squid/cache_log
cache_store_log none
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp:    1440  20% 10080
refresh_pattern ^gopher: 1440   0%  1440
refresh_pattern .           0  20%  4320
shutdown_lifetime 1 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
acl our_networks src XXX.XXX.XXX.XXX/XX
http_access allow our_networks
http_access deny all
http_reply_access allow all
icp_access deny all
tcp_outgoing_address XXX.XXX.XXX.XXX
cache_mgr root
mail_program /usr/bin/mailx
cache_effective_user squid
httpd_accel_host virtual
httpd_accel_port 0
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
logfile_rotate 60
coredump_dir /var/squid/cache


[squid-users] Re: Performance problems

2006-07-17 Thread Joost de Heer
Forgot one additional piece of information: the Squid version used is
2.5.13, but we've been having these problems with 2.5.7, 2.5.10 and 2.5.12 too.

Joost de Heer wrote:
> Hello,
>
> For a while, we've been having performance problems on one of our proxies.
> So far it looks like the machine is responding horridly when memory is
> freed.
>
> Here's some sample output from vmstat:
>
> 20060717-12  2  3  0  18332 190544 10132201    1 0 3
>  0 0  2  2  1  1
> 20060717-120100  2  0  0  20744 191040 10143081    1 0 3
>  0 0  2  2  1  1
> 20060717-120200  2  0  0  20620 191576 1013444    11 0 3
>  0 0  2  2  1  1
> 20060717-120300  2  0  0  20828 192012 101281611 0 3
>  0 0  2  2  1  1
> 20060717-120400  1  0  0  29832 192392 100286811 0 3
>  0 0  2  2  1  1
> 20060717-120500  1  0      0  58108 192524 97176011 0 3
> 0 0  2  2  1  1
> 20060717-120600  2  0  0  69172 192784 96505611 0 3
> 0 0  2  2  1  1
> 20060717-120700  2  0  0  45644 193200 98818411 0 3
> 0 0  2  2  1  1
> 20060717-120800  2  0  0  24668 193604 100877611 0 3
>  0 0  2  2  1  1
> 20060717-120900  2  0  0  21576 194048 101143611 0 3
>  0 0  2  2  1  1
> 20060717-121001  4  0  0  18056 194400 101090411 0 3
>  0 0  2  2  1  1
> 20060717-121100  2  0  0  18652 194904 101350411 0 3
>  0 0  2  2  1  1
>
> Between 12:04 and 12:07, the machine was responding very poorly.
>
> Output of 'free':
>
>          total     used    free  shared  buffers   cached
> Mem:   2055448  2034576   20872       0   202756  1002868
> -/+ buffers/cache:  828952  1226496
> Swap:  8388600        0  8388600
>
> Specs of the machine:
>
> Dual processor Intel(R) Xeon(TM) CPU 3.20GHz, 2 GB memory, machine has 3
> HD's: 2x72.8GB mirror for OS/software/logs and 1x72.8G single disk for
> cache.
>
> OS: Linux kslh086 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686
> i686 i386 GNU/Linux (upgrade to 2.6 is unfortunately not possible)
>
> Disks are all ext3.
>
> Relevant squid.conf lines:
>
> cache_replacement_policy heap GDSF
> memory_replacement_policy heap GDSF
> memory_pools off
> cache_swap_low 90
> cache_swap_high 95
> maximum_object_size 64 KB
> maximum_object_size_in_memory 8 KB
>
> Apart from squid, Apache httpd 2.2.2 and BIND 9.3.2 (as a caching DNS
> server) are running on this machine.
>
> Relevant Apache config:
>
> MinSpareServers 5
> MaxSpareServers 5
> StartServers5
> MaxRequestsPerChild 0
> MaxClients  5
>
> I've already minimised the cache (it's only 1G large now) to see if the
> problem was too much disk access, but no luck.
>
> Normal cache usage is about 300-350 req/s, throughput is about 2.5MB/s,
> and usually there are about 2000-2500 fd's open (proxy is configured to
> run with 8192 available fd's, we can't lower this as the peak usage seems
> to be about 6000 fds)
>
> Anyone has any ideas what might cause this? Directions to search for?
>
> Joost
>
>




RE: [squid-users] Zero Sized Reply

2006-07-17 Thread Oscar Rylin
Adrian's on the right track, but it's not only memory limits - other
kinds of limits can bite as well, and not always PHP's.
PHP's max_execution_time comes to mind, as does the HTTP timeout in Apache.

-- rylin

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: den 17 juli 2006 15:01
To: Guido Serassio
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: Re: [squid-users] Zero Sized Reply

Actually, I think I know this one. I think you'll find you're hitting some
PHP-configured memory limit and PHP is just exiting. Squid sees this as a
reply with no data - and hence gives the error.

What you need to do is edit php.ini, turn on logging, and log to a file (say
/tmp/log). Restart the web server and watch the logfile.
If it throws an out-of-memory error then tweak the memory settings
(max memory size, max upload size, etc.) in PHP.

It's not a Squid problem - if it's the problem I'm thinking about. :)
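The php.ini directives involved would be roughly the following (a sketch; the values are illustrative, not recommendations):

```
; enable logging so the failure shows up
log_errors = On
error_log = /tmp/log

; limits that commonly truncate large webmail uploads
memory_limit = 64M
upload_max_filesize = 16M
post_max_size = 16M
max_execution_time = 120
```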



Adrian


On Mon, Jul 17, 2006, Guido Serassio wrote:
> Hi,
> 
> At 14.43 17/07/2006, [EMAIL PROTECTED] wrote:
> 
> 
> 
> >Hi All,
> >
> >We are using qmail for our MTA with Horde as the webmail interface.
> >Our mail server is capable of both POP3 and web-based access. Using Outlook
> >as a POP3 client I can send and receive files of 5MB or more, while
> >if I'm using webmail I cannot even attach more than 400 KB, and I'm
> >getting an error message from the Squid proxy: "ZERO SIZED REPLY".
> >What does it mean and how can I solve it?
> >
> >
> >Thank you very much,
> >
> >Wennie
> 
> Knowing the version of your Squid would be very helpful for anyone
> trying to help you... :-)
> 
> Regards
> 
> Guido
> 
> 
> 
> -
> 
> Guido Serassio
> Acme Consulting S.r.l. - Microsoft Certified Partner
> Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
> Tel. : +39.011.9530135  Fax. : +39.011.9781115
> Email: [EMAIL PROTECTED]
> WWW: http://www.acmeconsulting.it/



Re: [squid-users] 2.6S1 WCCP2 problems

2006-07-17 Thread Adrian Chadd
On Mon, Jul 17, 2006, Shoebottom, Bryan wrote:
> Adrian,
> 
> The interest is 100%.  If I can't get WCCPv2 to work in 2.6, I will stay with 
> 2.5.  As for the debug, I will post what is in the cache.log file; I also got 
> 5 core files, one for each time Squid tried to start:

You need to fix this first before we try to fix WCCP2.

This error sounds like the diskd stuff isn't set up right - double-check
your SYSV shared memory and message queue configuration and get squid-2.6
stable.
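On Linux, the SYSV message-queue limits that diskd's msgget depends on can be raised via sysctl. A sketch of an /etc/sysctl.conf fragment (the values are illustrative and kernel-dependent):

```
# /etc/sysctl.conf fragment (illustrative values; apply with `sysctl -p`)
kernel.msgmnb = 16384
kernel.msgmni = 40
kernel.msgmax = 8192
```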



Adrian

> 
> FATAL: msgget failed
> Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
> CPU Usage: 0.008 seconds = 0.004 user + 0.004 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> FATAL: msgget failed
> Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
> CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> FATAL: msgget failed
> Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
> CPU Usage: 0.008 seconds = 0.004 user + 0.004 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> FATAL: msgget failed
> Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
> CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> FATAL: msgget failed
> Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
> CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> 
> Thanks,
> 
> Bryan Shoebottom CCNA
> Network/UNIX Administrator
> Network Services & Computer Operations
> Fanshawe College
> 
> 
> 
> -Original Message-
> From: Adrian Chadd [mailto:[EMAIL PROTECTED]
> Sent: Mon 7/17/2006 8:50 AM
> To: Shoebottom, Bryan
> Cc: Jeremy Hall; squid-users@squid-cache.org
> Subject: Re: [squid-users] 2.6S1 WCCP2 problems
>  
> On Mon, Jul 17, 2006, Shoebottom, Bryan wrote:
> I'm not going to say it's not a Cisco problem, because they seem to change 
> their code with every release, but I only changed the cache configuration 
> to use 2.6S1 and not 2.5S12.  I will try the debug (all on our development 
> network) and send in the results.  Thanks for the suggestions.
> 
> Hopefully the logs will give us a hint as to why WCCP isn't working.
> 
> How much interest is there in getting Squid-2.6 and WCCPv2 working
> well?
> 
> 
> 
> 
> Adrian
> 
> 


[squid-users] Fwd: Squid Slow Downloads problem

2006-07-17 Thread adam.cheng
Hi, squid-users:

I have run into a slow download problem with Squid. Could anybody tell me
what's the matter with my Squid, or is there any way to resolve this problem?
  

Squid info:

[EMAIL PROTECTED] ~]# squid -v
Squid Cache: Version 2.5.STABLE12
configure options:  --prefix=/usr/local/squid --enable-epoll 
--disable-ident-lookups --enable-async-io=160 --enable-storeio=ufs,aufs,diskd 
--enable-snmp --enable-cache-digests --enable-useragent-log 
--enable-referer-log --enable-kill-parent-hack --enable--internal-dns
--


squid.conf:

http_port 80
icp_port 0
acl httpmp3 url_regex -i ^http://.*\.mp3$
no_cache deny httpmp3
acl httpwmv url_regex -i ^http://.*\.wmv$
no_cache deny httpwmv
acl httprm url_regex -i ^http://.*\.rm$
no_cache deny httprm
 cache_mem 1768 MB 
 cache_swap_low 70
 cache_swap_high 80
maximum_object_size 204800 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 102400 KB
 cache_replacement_policy lru
 memory_replacement_policy lru
cache_dir diskd /data/cache1 28000 16 256
cache_dir diskd /data/cache2 28000 16 256
logformat squid_custom_log %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt "%{Referer}>h" "%{User-Agent}>h" "%{Cookie}>h"
cache_access_log /data/proclog/log/squid/access.log squid_custom_log
cache_log /data/proclog/log/squid/cache.log
cache_store_log none
pid_filename /var/run/squid.pid
 hosts_file /etc/hosts
 diskd_program /usr/local/squid/libexec/diskd
 unlinkd_program /usr/local/squid/libexec/unlinkd
 
 
refresh_pattern -i ^http://player.toodou.com.* 2073600 100% 2073600 ignore-reload
refresh_pattern -i ^http://www.blogcn.com.* 1440 50% 1440
refresh_pattern -i ^http://images.blogcn.com.* 1440 50% 1440
refresh_pattern -i ^http://female.blogcn.com.* 1440 50% 1440
refresh_pattern -i ^http://img.365ren.com.* 720 100% 720
refresh_pattern -i ^http://cfs1.365ren.com.* 720 100% 720
refresh_pattern -i ^http://cafe-img.365ren.com.* 720 100% 720
refresh_pattern -i ^http://cafe-cfs1.365ren.com.* 720 100% 720
refresh_pattern -i ^http 60 0% 60 ignore-reload
collapsed_forwarding on
refresh_stale_hit 0 minute
request_timeout 30 seconds
 persistent_request_timeout 3 seconds
 pconn_timeout 60 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl monitor src 192.168.1.0/255.255.255.0
http_access allow manager  
http_access allow manager monitor
http_access deny manager 
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny purge
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_reply_access allow all
 cache_mgr [EMAIL PROTECTED]
 cache_effective_user squid 
 cache_effective_group squid
visible_hostname CHN-SH-3-341 
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_with_proxy off
httpd_accel_uses_host_header on
dns_testnames original1.chinacache.com original2.chinacache.com
  logfile_rotate 0
 
cachemgr_passwd test4squid config
 store_avg_object_size 20 KB
client_db off
header_access X-Cache-Lookup deny all
snmp_port 3401
acl snmppublic snmp_community public
 client_persistent_connections off
 server_persistent_connections off
vary_ignore_expire on
strip_query_terms off
negative_ttl 0 minute
dns_retransmit_interval 10 seconds
store_dir_select_algorithm round-robin
dns_timeout 2 minute
negative_dns_ttl 1 minute
connect_timeout 30 seconds
read_timeout 15 minutes

---

Test information:  (all test was done to the same box, )

Squid with 80 port:

[EMAIL PROTECTED] ~]# wget -SO test http://player.toodou.com/flv/001/126/216/1126216.flv
--18:56:54--  http://player.toodou.com/flv/001/126/216/1126216.flv
   => `test'
Resolving player.toodou.com... 192.168.1.131
Connecting to player.toodou.com|192.168.1.131|:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.0 200 OK
  Date: Fri, 14 Jul 2006 05:45:42 GMT
  Content-Length: 42254194
  Content-Type: application/octet-stream
  ETag: "-1897917552"
  Last-Modified: Wed, 12 Jul 2006 11:34:09 GMT
  Server: Microsoft-IIS/6.0
  X-Cache: HIT from origin-player.toodou.com
  Via: 1.0 CHN-SH-4-912 (NetCache NetApp/5.5R5D8), 1.1 CHN-SH-3-911 (NetCache NetApp/6.0.3)
  Age: 39548
  X-Cache: HIT from CHN-SH-3-341
  Connect

Re: [squid-users] Zero Sized Reply

2006-07-17 Thread wlagmay
Sorry for that, it's 2.5.STABLE13.

Thanks

Wennie

Quoting Guido Serassio <[EMAIL PROTECTED]>:

> Hi,
>
> At 14.43 17/07/2006, [EMAIL PROTECTED] wrote:
>
>
>
> >Hi All,
> >
> >We are using qmail for our MTA with Horde as the webmail interface. Our mail
> >server supports both POP3 and web-based access. Using Outlook as a POP3
> >client I can send and receive files of 5 MB or more, but if I use webmail I
> >cannot attach more than 400 KB, and I get an error message from the Squid
> >proxy: "ZERO SIZED REPLY". What does it mean, and how can I solve it?
> >
> >
> >Thank you very much,
> >
> >Wennie
>
> Knowing the version of your Squid would be very helpful to anyone
> who wants to help you ... :-)
>
> Regards
>
> Guido
>
>
>
> -
> 
> Guido Serassio
> Acme Consulting S.r.l. - Microsoft Certified Partner
> Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
> Tel. : +39.011.9530135  Fax. : +39.011.9781115
> Email: [EMAIL PROTECTED]
> WWW: http://www.acmeconsulting.it/
>
>






Re: [squid-users] Zero Sized Reply

2006-07-17 Thread Adrian Chadd
Actually, I think I know this one. I think you'll find you're hitting
a PHP-configured memory limit and PHP is just exiting. Squid sees
this as a reply with no data, and hence gives the error.

What you need to do is edit php.ini, turn on logging, and log to a file
(say /tmp/log). Restart the web server and watch the logfile.
If it throws an out-of-memory error, then tweak the memory
settings (max memory size, max upload size, etc.) in PHP.

It's not a Squid problem - if it's the problem I'm thinking about. :)
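
A sketch of the php.ini changes described above (the directive values here are illustrative guesses, not known-good settings for this site):

```ini
; Hedged sketch only -- values are illustrative, tune to your server.
log_errors = On
error_log = /tmp/log           ; watch this file while reproducing the upload
memory_limit = 32M             ; raise if the log shows an out-of-memory error
upload_max_filesize = 10M      ; per-file upload cap
post_max_size = 12M            ; should be >= upload_max_filesize
```

After editing, restart the web server and retry the webmail attachment while tailing the error log.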



Adrian


On Mon, Jul 17, 2006, Guido Serassio wrote:
> Hi,
> 
> At 14.43 17/07/2006, [EMAIL PROTECTED] wrote:
> 
> 
> 
> >Hi All,
> >
> >We are using qmail for our MTA with Horde as the webmail interface. Our
> >mail server supports both POP3 and web-based access. Using Outlook as a
> >POP3 client I can send and receive files of 5 MB or more, but if I use
> >webmail I cannot attach more than 400 KB, and I get an error message from
> >the Squid proxy: "ZERO SIZED REPLY". What does it mean, and how can I
> >solve it?
> >
> >
> >Thank you very much,
> >
> >Wennie
> 
> Knowing the version of your Squid would be very helpful to anyone
> who wants to help you ... :-)
> 
> Regards
> 
> Guido
> 
> 
> 
> -
> 
> Guido Serassio
> Acme Consulting S.r.l. - Microsoft Certified Partner
> Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
> Tel. : +39.011.9530135  Fax. : +39.011.9781115
> Email: [EMAIL PROTECTED]
> WWW: http://www.acmeconsulting.it/


Re: [squid-users] Zero Sized Reply

2006-07-17 Thread Guido Serassio

Hi,

At 14.43 17/07/2006, [EMAIL PROTECTED] wrote:




Hi All,

We are using qmail for our MTA with Horde as the webmail interface. Our mail
server supports both POP3 and web-based access. Using Outlook as a POP3 client
I can send and receive files of 5 MB or more, but if I use webmail I cannot
attach more than 400 KB, and I get an error message from the Squid proxy:
"ZERO SIZED REPLY". What does it mean, and how can I solve it?


Thank you very much,

Wennie


Knowing the version of your Squid would be very helpful to anyone
who wants to help you ... :-)


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: [squid-users] 2.6S1 WCCP2 problems

2006-07-17 Thread Adrian Chadd
On Mon, Jul 17, 2006, Shoebottom, Bryan wrote:
> I'm not going to say it's not a Cisco problem, because they seem to change 
> their code with every release, but I only changed the cache configuration to 
> use 2.6S1 instead of 2.5S12.  I will try the debug (all on our development 
> network) and send in the results.  Thanks for the suggestions.

Hopefully the logs will give us a hint as to why WCCP isn't working.

How much interest is there in getting Squid-2.6 and WCCPv2 working
well?




Adrian



[squid-users] Zero Sized Reply

2006-07-17 Thread wlagmay


Hi All,

We are using qmail for our MTA with Horde as the webmail interface. Our mail
server supports both POP3 and web-based access. Using Outlook as a POP3 client
I can send and receive files of 5 MB or more, but if I use webmail I cannot
attach more than 400 KB, and I get an error message from the Squid proxy:
"ZERO SIZED REPLY". What does it mean, and how can I solve it?


Thank you very much,

Wennie



RE: [squid-users] Putting high mem objects on cache. [signed]

2006-07-17 Thread Ben Hathaway
All,

I asked this question a while back. It seems that it's not possible.
My suggested solution (although I haven't tried it yet) is to run an actual
web server on the machine and download the known large files (virus updates,
etc.) as a cron job every day at 4am. Then we should be able to set up
redirection rules in Squid to point to our web server rather than the live
internet server. This is a little like reinventing the wheel, but should
work OK. It's on my list of things to do, but way down near the bottom at the
moment.
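
The redirection step of that approach could be a Squid 2.x `redirect_program` helper; this is an untested sketch, and the hostname `download.example.com` and the local `/mirror/` path are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical redirector sketch for squid.conf's "redirect_program".
# Rewrites known-large-file URLs to a local mirror populated by cron;
# download.example.com and /mirror/ are made-up placeholders.

rewrite_url() {
    case "$1" in
        http://download.example.com/*)
            # Serve from the local web server instead of the origin.
            echo "http://127.0.0.1/mirror/${1#http://download.example.com/}"
            ;;
        *)
            # Anything else passes through unchanged.
            echo "$1"
            ;;
    esac
}

# Squid feeds one request per line: "URL client/fqdn ident method".
while read url rest; do
    rewrite_url "$url"
done
```

It would be wired up in squid.conf with a `redirect_program /path/to/redirector.sh` line (path hypothetical).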

Let me know if anyone comes up with a better solution.

Regards,

Ben Hathaway
Software Developer
http://www.spidersat.net

-Original Message-
From: Rajendra Adhikari [c] [mailto:[EMAIL PROTECTED] 
Sent: 17 July 2006 12:46
To: squid-users@squid-cache.org
Subject: [squid-users] Putting high mem objects on cache. [signed]

Hi,
I have set maximum_object_size to 4MB. But without increasing this 
value, I would like to put some objects greater than 4mb on cache, like, 
msn installation file.
How do I put it on cache explicitily?  If it can be done, what would be 
the best way to automate this task? Please give me an idea if anyone has 
done this.

thanks in advance,
Rajendra.



--
- [ SECURITY NOTICE ] -
To: [EMAIL PROTECTED]
For your security, [EMAIL PROTECTED]
digitally signed this message on 17 July 2006 at 09:45:45 UTC.
Verify this digital signature at http://www.ciphire.com/verify.
 [ CIPHIRE DIGITAL SIGNATURE ] 
Q2lwaGlyZSBTaWcuAjhzcXVpZC11c2Vyc0BzcXVpZC1jYWNoZS5vcmcAcmFqZW5k
cmFAc3ViaXN1Lm5ldC5ucABlbWFpbCBib2R5AB4BAAB8AHwBSVy7RB4B
AABAAgACAAIAAgAge41wR4L+bXcWdThKam3FEHwmE/qn1pYTspEfujVuk+0BAHcW
34bSvF8RoB15amIjv339V+ZaGrEv2mG92v+dvY8RXvn4bPVNeO8r3tu+4+4rwghd
tuiXMrd2Hvd1R0GWnoSPXvQgU2lnRW5k
-- [ END DIGITAL SIGNATURE ] --





[squid-users] [EMAIL PROTECTED]: Re: [squid-users] 2.6S1 WCCP2 problems]

2006-07-17 Thread Adrian Chadd

oops.

- Forwarded message from Adrian Chadd <[EMAIL PROTECTED]> -

Date: Mon, 17 Jul 2006 20:02:38 +0800
From: Adrian Chadd <[EMAIL PROTECTED]>
To: "Shoebottom, Bryan" <[EMAIL PROTECTED]>
Cc: squid-dev@squid-cache.org
Subject: Re: [squid-users] 2.6S1 WCCP2 problems
User-Agent: Mutt/1.5.9i

On Mon, Jul 17, 2006, Shoebottom, Bryan wrote:
> Hey,
> 
> It's a 6500 with 12.1(26)E code on it.  It works with 2.5 stable code with 
> the WCCP2 patch applied.

If you're not afraid of a little risk, try this:

I think these will turn on wccp packet logging:
debug ip wccp packet
debug ip wccp event

make sure "no logging console" is on.

Then, on the squid side:

debug_options 80,99

.. and put the resultant logs into a bug submitted via bugzilla.
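
For context, the Squid side of a 2.6 WCCPv2 setup is a few squid.conf directives; the following is a hedged sketch only (the router address is hypothetical, and the numeric method values are as I recall them for early 2.6 releases):

```
# Hypothetical router IP; directive names from Squid 2.6 WCCPv2 support.
wccp2_router 192.168.0.1
wccp2_forwarding_method 1    # 1 = GRE encapsulation, 2 = L2 rewrite
wccp2_return_method 1
wccp2_service standard 0     # service group 0 = plain HTTP
```

The debug output requested above should show whether HERE_I_AM/I_SEE_YOU negotiation with the router ever completes.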




Adrian


- End forwarded message -


Re: [squid-users] Parent cache question

2006-07-17 Thread Dwayne Hottinger
The Netfilter people could help quite well with that question. What kind of
firewall is currently deployed?
Quoting Tim Bates <[EMAIL PROTECTED]>:

> I don't think that's a viable solution for me (iptables isn't being used
> here because I have no need, and I'd like to leave it that way), but how
> would I do that anyway? Did you mean inbound redirect, and bypass the
> local proxy for those that need to? Or did you mean outbound, where it's
> all going to appear to be from the squid box?
>
> Tim
>
> Dwayne Hottinger wrote:
> > You could do a redirect at your firewall if you use iptables (netfilter) it
> > should be quite easy.
> >
> > ddh
> >
> >
> > Quoting Tim Bates <[EMAIL PROTECTED]>:
> >
> >
> >> Hi.
> >>
> >> I have a situation where I have to get squid to use a particular parent
> >> proxy that requires authentication... Now, I've done this part, but some
> >> users don't actually have their accounts yet on the new parent, so they
> >> can't use it.
> >>
> >> What I'd like to do is have some specific users (based on IP address)
> >> get directed to the old parent proxy, while the rest go to the new one.
> >> Is it possible to create rules as to which parent is used?
> >>
> >> Tim B
> >>
> >> **
> >> This message is intended for the addressee named and may contain
> >> privileged information or confidential information or both. If you
> >> are not the intended recipient please delete it and notify the sender.
> >> **
> >>
> >>
> >
> >
> > --
> > Dwayne Hottinger
> > Network Administrator
> > Harrisonburg City Public Schools
> >
> >
>
> **
> This message is intended for the addressee named and may contain
> privileged information or confidential information or both. If you
> are not the intended recipient please delete it and notify the sender.
> **
>


--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools
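
For the parent-selection question in this thread, Squid itself can choose a parent per source address with `cache_peer_access`, without involving the firewall. A sketch, with hypothetical hostnames and an illustrative address range:

```
# Hypothetical peers and addresses -- adjust to the real parents.
acl oldusers src 192.168.1.0/255.255.255.0

cache_peer oldproxy.example.com parent 8080 0 no-query default
cache_peer newproxy.example.com parent 8080 0 no-query

cache_peer_access oldproxy.example.com allow oldusers
cache_peer_access oldproxy.example.com deny all
cache_peer_access newproxy.example.com deny oldusers
cache_peer_access newproxy.example.com allow all

# Force traffic through a parent rather than going direct.
never_direct allow all
```

Users matching `oldusers` are routed to the old parent; everyone else goes to the new one.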


AW: [squid-users] Performance problems

2006-07-17 Thread Stolle, Martin
Hello,

try setting 

client_persistent_connections off

This will dramatically reduce the number of open fds and the amount of
processor time Squid uses.

Greetings,
 
Martin Stolle
ekom21
 

-----Original Message-----
From: Joost de Heer [mailto:[EMAIL PROTECTED] 
Sent: Monday, 17 July 2006 12:34
To: squid-users@squid-cache.org
Subject: [squid-users] Performance problems

Hello,

For a while, we've been having performance problems on one of our proxies.
So far it looks like the machine is responding horridly when memory is
freed.

Here's some sample output from vmstat:

20060717-12      2  3  0  18332 190544 101322011 0 3   0 0  2  2  1  1
20060717-120100  2  0  0  20744 191040 101430811 0 3   0 0  2  2  1  1
20060717-120200  2  0  0  20620 191576 101344411 0 3   0 0  2  2  1  1
20060717-120300  2  0  0  20828 192012 101281611 0 3   0 0  2  2  1  1
20060717-120400  1  0  0  29832 192392 100286811 0 3   0 0  2  2  1  1
20060717-120500  1  0  0  58108 192524 97176011 0 3    0 0  2  2  1  1
20060717-120600  2  0  0  69172 192784 96505611 0 3    0 0  2  2  1  1
20060717-120700  2  0  0  45644 193200 98818411 0 3    0 0  2  2  1  1
20060717-120800  2  0  0  24668 193604 100877611 0 3   0 0  2  2  1  1
20060717-120900  2  0  0  21576 194048 101143611 0 3   0 0  2  2  1  1
20060717-121001  4  0  0  18056 194400 101090411 0 3   0 0  2  2  1  1
20060717-121100  2  0  0  18652 194904 101350411 0 3   0 0  2  2  1  1

Between 12:04 and 12:07, the machine was responding very poorly.

Output of 'free':

             total       used       free     shared    buffers     cached
Mem:       2055448    2034576      20872          0     202756    1002868
-/+ buffers/cache:     828952    1226496
Swap:      8388600          0    8388600

Specs of the machine:

Dual processor Intel(R) Xeon(TM) CPU 3.20GHz, 2 GB memory, machine has 3
HD's: 2x72.8GB mirror for OS/software/logs and 1x72.8G single disk for
cache.

OS: Linux kslh086 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686
i686 i386 GNU/Linux (upgrade to 2.6 is unfortunately not possible)

Disks are all ext3.

Relevant squid.conf lines:

cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
memory_pools off
cache_swap_low 90
cache_swap_high 95
maximum_object_size 64 KB
maximum_object_size_in_memory 8 KB

Apart from squid, Apache httpd 2.2.2 and BIND 9.3.2 (as a caching DNS
server) are running on this machine.

Relevant Apache config:

MinSpareServers 5
MaxSpareServers 5
StartServers    5
MaxRequestsPerChild 0
MaxClients  5

I've already minimised the cache (it's only 1G large now) to see if the
problem was too much disk access, but no luck.

Normal cache usage is about 300-350 req/s, throughput is about 2.5MB/s,
and usually there are about 2000-2500 fd's open (proxy is configured to
run with 8192 available fd's, we can't lower this as the peak usage seems
to be about 6000 fds)

Does anyone have any ideas what might cause this? Any directions to search in?

Joost




[squid-users] Performance problems

2006-07-17 Thread Joost de Heer
Hello,

For a while, we've been having performance problems on one of our proxies.
So far it looks like the machine is responding horridly when memory is
freed.

Here's some sample output from vmstat:

20060717-12      2  3  0  18332 190544 101322011 0 3   0 0  2  2  1  1
20060717-120100  2  0  0  20744 191040 101430811 0 3   0 0  2  2  1  1
20060717-120200  2  0  0  20620 191576 101344411 0 3   0 0  2  2  1  1
20060717-120300  2  0  0  20828 192012 101281611 0 3   0 0  2  2  1  1
20060717-120400  1  0  0  29832 192392 100286811 0 3   0 0  2  2  1  1
20060717-120500  1  0  0  58108 192524 97176011 0 3    0 0  2  2  1  1
20060717-120600  2  0  0  69172 192784 96505611 0 3    0 0  2  2  1  1
20060717-120700  2  0  0  45644 193200 98818411 0 3    0 0  2  2  1  1
20060717-120800  2  0  0  24668 193604 100877611 0 3   0 0  2  2  1  1
20060717-120900  2  0  0  21576 194048 101143611 0 3   0 0  2  2  1  1
20060717-121001  4  0  0  18056 194400 101090411 0 3   0 0  2  2  1  1
20060717-121100  2  0  0  18652 194904 101350411 0 3   0 0  2  2  1  1

Between 12:04 and 12:07, the machine was responding very poorly.

Output of 'free':

             total       used       free     shared    buffers     cached
Mem:       2055448    2034576      20872          0     202756    1002868
-/+ buffers/cache:     828952    1226496
Swap:      8388600          0    8388600

Specs of the machine:

Dual processor Intel(R) Xeon(TM) CPU 3.20GHz, 2 GB memory, machine has 3
HD's: 2x72.8GB mirror for OS/software/logs and 1x72.8G single disk for
cache.

OS: Linux kslh086 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686
i686 i386 GNU/Linux (upgrade to 2.6 is unfortunately not possible)

Disks are all ext3.

Relevant squid.conf lines:

cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
memory_pools off
cache_swap_low 90
cache_swap_high 95
maximum_object_size 64 KB
maximum_object_size_in_memory 8 KB

Apart from squid, Apache httpd 2.2.2 and BIND 9.3.2 (as a caching DNS
server) are running on this machine.

Relevant Apache config:

MinSpareServers 5
MaxSpareServers 5
StartServers    5
MaxRequestsPerChild 0
MaxClients  5

I've already minimised the cache (it's only 1G large now) to see if the
problem was too much disk access, but no luck.

Normal cache usage is about 300-350 req/s, throughput is about 2.5MB/s,
and usually there are about 2000-2500 fd's open (proxy is configured to
run with 8192 available fd's, we can't lower this as the peak usage seems
to be about 6000 fds)

Does anyone have any ideas what might cause this? Any directions to search in?

Joost



[squid-users] Putting high mem objects on cache. [signed]

2006-07-17 Thread Rajendra Adhikari \[c\]

Hi,
I have set maximum_object_size to 4MB. But without increasing this 
value, I would like to put some objects greater than 4mb on cache, like, 
msn installation file.
How do I put it on cache explicitily?  If it can be done, what would be 
the best way to automate this task? Please give me an idea if anyone has 
done this.


thanks in advance,
Rajendra.






[squid-users] Optimal parameters for Squid

2006-07-17 Thread Prabu

Hello All,

I need some help configuring Squid with optimal parameters. Are there any
formulas or calculations for the following?


1) How do I calculate the optimal cache_mem for my environment?
2) Does increasing cache_mem beyond a certain value decrease Squid
performance?

3) How do I calculate the optimal cache_dir for my environment?
4) How many file descriptors does Squid need?
5) Is there any relation between file descriptors and the number of
connections (number of requests)?


I got some hints from the Squid FAQ and from Henrik's notes ("11.4 Running
out of filedescriptors"), but I need details on how to configure the ulimit
so that it suits my environment.
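
As a rough sketch of the ulimit side (the limit of 8192 here is purely illustrative; note that Squid 2.x bakes in the file-descriptor limit that is in effect when ./configure runs, so rebuilding may be required):

```
# Illustrative value only -- size to your expected peak concurrent connections.

# At build time, raise the limit before configuring and compiling:
ulimit -HSn 8192
./configure ...
make && make install

# At start-up (e.g. in the init script), raise it again before invoking squid:
ulimit -HSn 8192
```

A common rule of thumb is roughly one fd per client connection plus one per concurrent server connection, plus headroom for disk and log files.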


These things could also be added to the Squid FAQ; it would help other
squid-users in the future.


Thanks in advance.

--Prabu.M.A
When I was born I was so surprised
I didn't talk for a year and a half.
-Gracie Allen


Re: [squid-users] Download always get disconnected through proxy

2006-07-17 Thread Henrik Nordstrom
Mon 2006-07-17 at 11:12 +0800, Yong Bong Fong wrote:
> Dear friends,
> 
> Wondering if anyone else faces a similar issue with downloading
> through the proxy. Many users have complained that when they
> download through the proxy, they often get a corrupted file or the
> download disconnects halfway.
> Any idea what went wrong?

Anything in cache.log?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] shockwave problem

2006-07-17 Thread Raj

Hello all,

I'm using Squid Version 2.5.STABLE10 for Internet access, which means each
internet access request goes through 2 proxies, one the child and the
other the parent:
PC -> Squidproxy1 -> Squidproxy2 -> Internet
Here I'm encountering a problem. When a user tries to access
http://www.adobe.com/shockwave/welcome, it should install the Shockwave
player from the Internet (www.macromedia.com). But it fails to
install Shockwave. It displays the message "When you see the animation
playing below the labeled box, then your installation was successful."
I am not sure why it couldn't install Shockwave. I am also having
problems streaming Windows Media Player.

I would appreciate it if someone could help me fix this issue.

Thanks.