Re: [squid-users] LDAP or winbind?

2008-04-21 Thread Adrian Chadd
The "official" way is to use winbind. The samba guys know more about
NTLM than us.

For basic auth? ldap is fine.
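
Something along these lines is the usual starting point for basic auth
against AD (just a sketch - the helper path, bind credentials and AD host
below are examples, adjust them for your site):

  auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example,dc=com" -f "sAMAccountName=%s" -D "cn=squid,cn=Users,dc=example,dc=com" -w secret -h ad.example.com
  auth_param basic children 5
  auth_param basic realm Squid proxy
  acl auth_users proxy_auth REQUIRED
  http_access allow auth_users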



Adrian

On Mon, Apr 21, 2008, Dwyer, Simon wrote:
> Hi all,
> 
> I am trying to get my squid server to talk to AD.  It seems there are two
> ways of doing this: Squid -> ldap -> kerberos -> AD, or Squid -> winbind
> -> kerberos -> AD.
> 
> Is there a preferred method, or do both work the same?
> 
> Cheers,
> 
> Simon

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6 Stable19 - Part II

2008-04-21 Thread Thompson, Scott (WA)
Sorry, in my previous post I assumed I was running 2.6 Stable 6 and wanted
to upgrade to Stable 19, but it appears I am running 2.5 Stable 6 and want
to upgrade to Squid 2.6 Stable 19.

I have found that when I run squid -v I get the following output

Squid Cache: Version 2.5.STABLE6
configure options:  --build=i686-redhat-linux-gnu
--host=i686-redhat-linux-gnu --target=i386-redhat-linux-gnu
--program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin
--sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share
--includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec
--localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man
--infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin
--libexecdir=/usr/lib/squid --localstatedir=/var --sysconfdir=/etc/squid
--enable-poll --enable-snmp --enable-removal-policies=heap,lru
--enable-storeio=aufs,coss,diskd,null,ufs --enable-ssl
--with-openssl=/usr/kerberos --enable-delay-pools
--enable-linux-netfilter --with-pthreads
--enable-ntlm-auth-helpers=SMB,winbind
--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group,winbind_group
--enable-auth=basic,ntlm --with-winbind-auth-challenge
--enable-useragent-log --enable-referer-log
--disable-dependency-tracking --enable-cachemgr-hostname=localhost
--disable-ident-lookups --enable-truncate --enable-underscores
--datadir=/usr/share
--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,winbind

Does that mean I can just run ./configure from the folder in which I
extracted the Squid 2.6 Stable19 files with the above command-line
switches, and I will then have Stable 19 installed? I assume I would have
to restart the squid service!

Any info would be greatly appreciated

Scott



Re: [squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6 Stable19 - Part II

2008-04-21 Thread Adrian Chadd
(Top-post)

Yes, that should work just fine.
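
Roughly, something like this should do it (just a sketch - assuming a
source build with the same prefixes, with the install step run as root):

  cd squid-2.6.STABLE19
  ./configure [the options from your squid -v output]
  make
  squid -k shutdown
  make install
  squid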



Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Marking Cached traffic..

2008-04-21 Thread Henrik Nordstrom
ons 2008-04-16 klockan 16:24 +0200 skrev Stephan Viljoen:
> I was wondering whether it's possible to mark cached traffic with a different 
> TOS than uncached traffic. I need to come up with a way of passing cached 
> traffic through our bandwidth manager without taxing the end user for it. 
> Basically giving them the full benefits of the proxy server.

This part of the zph patch has been cleaned up and merged into 2.7.
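
In 2.7 it looks roughly like this (a sketch - check squid.conf.default in
2.7 for the exact directive names and values):

  zph_mode tos
  zph_local 0x30
  zph_sibling 0x31
  zph_parent 0x32

Cache hits then go out with the configured TOS value, which your bandwidth
manager can match on.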

Thanks to this discussion for reminding me about the patch.

Regards
Henrik



Re: [squid-users] Upgrade from Stable6 to stable19

2008-04-21 Thread Henrik Nordstrom
mån 2008-04-21 klockan 14:25 +0800 skrev Thompson, Scott (WA):
> Stupid question I am sure, but Linux is not one of my strong points
> Is there a good link for some doco on how to upgrade Squid from Stable 6
> to Stable 19?
> Do I have to reinstall and recompile?

2.6.STABLE19 understands 2.6.STABLE6 configurations without any change.
Just upgrade Squid and restart it.

How best to upgrade Squid depends on how you installed it in the first
place, i.e. whether you installed Squid as an OS vendor-provided binary or
by hand from source. If OS vendor-provided, then find an upgrade for your
OS.

Regards
Henrik



Re: [squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6 Stable19 - Part II

2008-04-21 Thread Henrik Nordstrom
mån 2008-04-21 klockan 15:32 +0800 skrev Thompson, Scott (WA):
> Sorry in my previous post I assumed I was running 2.6 Stable 6 and I
> wanted to u/g to Stable 19 but it appears I am running 2.5 Stable6 and I
> want to u/g to Squid 2.6 Stable 19

What OS are you running?

Apparently some RedHat-based Linux, but which one, and which version?

Regards
Henrik



RE: [squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6Stable19 - Part II

2008-04-21 Thread Thompson, Scott (WA)
#cat /proc/version
Linux version 2.6.9-11.EL ([EMAIL PROTECTED]) (gcc version 3.4.3 20050227 (Red 
Hat 3.4.3-22)) #1 Wed Jun 8 16:59:52 CDT 2005





RE: [squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6Stable19 - Part II

2008-04-21 Thread Henrik Nordstrom
mån 2008-04-21 klockan 16:25 +0800 skrev Thompson, Scott (WA):
> #cat /proc/version
> Linux version 2.6.9-11.EL ([EMAIL PROTECTED]) (gcc version 3.4.3 20050227 
> (Red Hat 3.4.3-22)) #1 Wed Jun 8 16:59:52 CDT 2005

Then the update for RHEL should probably work for you:

http://www.squid-cache.org/Download/binaries.dyn


Regards
Henrik



[squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Zhou, Bo(Bram)
Hi,

Recently I did some interesting performance testing on Squid configured
with Cache Digest enabled. The results show that Squid uses more than 20%
more CPU time than Squid running without Cache Digest. Below are my
detailed testing environment, configuration, and results. Any light anyone
can shed on the possible reason would be greatly appreciated. Please also
point out any configuration errors. Thanks a lot.

1. Hardware configuration : HP DL380
(1) Squid Server
CPU: 2 Xeon 2.8GHz CPUs, each Xeon CPU has 2 Cores
Memory size: 6G, Disk: 36G, NIC: 1000M
(2) Client and Web Server : Dell Vostro200 running with Web Polygraph 3.1.5

2. Squid Configuration
(1) 2 Squid instances are running on the same HP server, each using the same
IP address but a different port, pure in-memory cache
Squid1 configuration: 
http_port 8081
cache_mem 1024 MB
cache_dir null /tmp
cache_peer 192.168.10.2 sibling   8082  0 proxy-only
digest_generation on
digest_bits_per_entry 5
digest_rebuild_period 1 hour
digest_swapout_chunk_size 4096 bytes
digest_rebuild_chunk_percentage 10

Squid2 configuration:
http_port 8082
cache_mem 1024 MB
cache_dir null /tmp
cache_peer 192.168.10.2 sibling   8081  0 proxy-only
digest_generation on
digest_bits_per_entry 5
digest_rebuild_period 1 hour
digest_swapout_chunk_size 4096 bytes
digest_rebuild_chunk_percentage 10

3. 2 Polygraph clients are used to send HTTP requests to the Squid instances.
Each client sends requests to a different Squid instance. Each client is
configured with 1000 users at 1.2 requests/s, so in total each client sends
1200 requests/s.

4. Test result (Note: since the server has 4 CPU cores, the total CPU
utilization can reach 400%)
(1) Running 2 Squid instances with Cache Digest Enabled, each handles 1200
request/second: 
Each instance used ~95% CPU even while Squid wasn't rebuilding the
digest 

(2) Running 2 Squid instances with Cache Digest Enabled, one handles 1200
request/second, one is idle(no traffic to it)
The one with traffic has CPU utilization ~65%, the other one is idle

(3) Running 2 Squid instances with Cache Digest Disabled, each handles 1200
request/second:
Each instance used ~75% CPU


Best Regards,
Bo Zhou




RE: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Zhou, Bo(Bram)
One thing missed: I'm using Squid 2.6 STABLE 19. Thanks.

Best Regards,
Bo Zhou
Tel: +86-10-62295296 ext.5708
Fax: +86-10-950507 ext.336690 
  
 
 





Re: [squid-users] how to check virus using squid

2008-04-21 Thread Henrik K
On Mon, Apr 21, 2008 at 12:11:49AM +0200, Henrik Nordstrom wrote:
> tor 2008-04-17 klockan 08:02 -0300 skrev Cassiano Martin:
> 
> > It's an anti-virus proxy which uses clamav. You can use it together with squid.
> 
> Or better yet, if using Squid-3 you can plug clamav directly into Squid
> using ICAP and the C-ICAP project...

Well, each has pros and cons, so I wouldn't call anything "better".

For example, HAVP can also use other scanners, scan ZIPs partially and has
slightly more intelligent "buffering".

C-ICAP uses a more "native" protocol, but that's about it. It hasn't even
been updated to work with ClamAV 0.93 yet.



[squid-users] Does anyone know how to make https work?

2008-04-21 Thread Brian Lu

Hi All
I have a problem: when I use https to access web pages, my IE always shows
me:

1. If a cache_peer is set up:
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: 
https://www.chb.com.tw/wcm/web/home/index.html

The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols. For 
example, you can not POST a Gopher request.

Generated Mon, 21 Apr 2008 05:22:30 GMT by proxy.seed.net.tw 
(squid/2.5.STABLE11)


2. If no cache_peer:
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: 
https://www.chb.com.tw/wcm/web/home/index.html

The following error was encountered:
Connection to 210.65.204.245 Failed
The system returned:
   (71) Protocol error
The remote host or network may be down. Please try the request again.
Your cache administrator is .

Generated Mon, 21 Apr 2008 05:18:30 GMT by 192.168.1.254 (squid/3.0.STABLE2)

My squid version:
[EMAIL PROTECTED] ]# squid -v
Squid Cache: Version 3.0.STABLE2
configure options:  '--enable-ssl' '--enable-linux-netfilter' 
'--enable-referer-log'


My squid.conf:
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_reply_access allow all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128 transparent
https_port 3129 cert=/usr/local/squid/etc/cert.pem 
key=/usr/local/squid/etc/key.pem transparent

hierarchy_stoplist cgi-bin ?
cache_mem 10 MB
cache_dir ufs /var/spool/squid 10 8 64
logformat squid  Time:%tl Local:%>a:%>p Destination:%la:%lp RqstMethod:%rm 
%Ss/%03Hs RqstTime:%5tr(milliseconds) RplySize:%<st
logformat squidmime  %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

#access_log /usr/local/squid/var/logs/access.log squid
access_log syslog:local5.info squid
cache_log /usr/local/squid/var/logs/cache.log
cache_store_log none
emulate_httpd_log on
log_ip_on_direct on
pid_filename /var/run/squid.pid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern (cgi-bin|\?)0   0%  0
refresh_pattern .   0   20% 4320
connect_timeout 15 seconds
cache_effective_user squid
cache_effective_group squid
icp_port 3130
#never_direct allow all
#coredump_dir /usr/local/squid/var/cache
cache_peer 61.219.36.120 parent 80 3130 no-query no-netdb-exchange no-digest 
weight=1 round-robin connect-timeout=15
cache_peer 139.175.55.210 parent 8080 3130 no-query no-netdb-exchange 
no-digest weight=6 round-robin connect-timeout=15

.

Does anyone know how to make https work? Thank you very much~

Best regards,
Brian Lu 



Re: [squid-users] Upgrade from Squid 2.5 Stable6 to Squid 2.6Stable19 - Part II

2008-04-21 Thread Amos Jeffries

Henrik Nordstrom wrote:

mån 2008-04-21 klockan 16:25 +0800 skrev Thompson, Scott (WA):

#cat /proc/version
Linux version 2.6.9-11.EL ([EMAIL PROTECTED]) (gcc version 3.4.3 20050227 (Red 
Hat 3.4.3-22)) #1 Wed Jun 8 16:59:52 CDT 2005


Then the update for RHEL should probably work for you:

http://www.squid-cache.org/Download/binaries.dyn



If not, the full configure and compile instructions are in the wiki FAQ:

http://wiki.squid-cache.org/SquidFaq/CompilingSquid
http://wiki.squid-cache.org/SquidFaq/InstallingSquid

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Chat Apps getting blocked

2008-04-21 Thread Amos Jeffries

Odhiambo Washington wrote:

Hello List,

I copycat(ed) a squid.conf from this list a few days ago and did
minimal config mods just to allow my network to use it. It works great
with youtube caching, but strangely, it blocks MSN/Yahoo chats, and I
sincerely cannot see where this is happening. The file can be accessed
from the following URL:

https://212.22.160.35/~wash/squid.conf.txt
(I use a self-signed certificate, so please just accept it)

I get the following in the access log:

1208510066.248   7255 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.110.28:1863 - NONE/- text/html
1208510066.726   7850 192.168.0.150 TCP_DENIED/403 1422 CONNECT
207.46.110.89:1863 - NONE/- text/html
1208510100.571    847 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.110.94:1863 - NONE/- text/html
1208510119.339 28 192.168.0.150 TCP_DENIED/403 1422 CONNECT
207.46.110.94:1863 - NONE/- text/html
1208510173.114    853 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.108.13:1863 - NONE/- text/html
1208510216.270    668 192.168.0.150 TCP_DENIED/403 1422 CONNECT
207.46.108.85:1863 - NONE/- text/html
1208510300.314    852 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.108.97:1863 - NONE/- text/html
1208510347.723    853 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.108.86:1863 - NONE/- text/html
1208510371.584    823 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.108.66:1863 - NONE/- text/html
1208510408.981 20 192.168.0.150 TCP_DENIED/403 1422 CONNECT
207.46.108.97:1863 - NONE/- text/html
1208510413.535   1673 192.168.0.106 TCP_DENIED/403 1422 CONNECT
207.46.108.93:1863 - NONE/- text/html
1208510488.270 19 192.168.0.106 TCP_DENIED/403 1438 CONNECT
messenger.hotmail.com:1863 - NONE/- text/html
1208510609.843  0 192.168.0.117 TCP_DENIED/403 1426 CONNECT
talk.google.com:5222 - NONE/- text/html
1208510609.844  0 192.168.0.117 TCP_DENIED/403 1430 CONNECT
scs.msg.yahoo.com:5050 - NONE/- text/html
1208510616.495  0 192.168.0.117 TCP_DENIED/403 1426 CONNECT
talk.google.com:5222 - NONE/- text/html
1208510617.057  1 192.168.0.117 TCP_DENIED/403 1430 CONNECT
scs.msg.yahoo.com:5050 - NONE/- text/html
1208510637.734 20 192.168.0.106 TCP_DENIED/403 1438 CONNECT
messenger.hotmail.com:1863 - NONE/- text/html
1208510643.865 31 192.168.0.106 TCP_DENIED/403 1438 CONNECT
messenger.hotmail.com:1863 - NONE/- text/html
1208510676.014  0 192.168.0.117 TCP_DENIED/403 1430 CONNECT
scs.msg.yahoo.com:5050 - NONE/- text/html




Where in the acls is this coming from?



You have:
  http_access deny CONNECT !SSL_ports

If you really want to allow the chat programs out, then you will need to 
add an acl for their domain/ports and allow CONNECT for them.
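
For example, something like this placed above that deny line (ports taken
from your log, the acl name is just an example):

  acl Chat_ports port 1863        # MSN Messenger
  acl Chat_ports port 5050        # Yahoo Messenger
  acl Chat_ports port 5222        # Google Talk / XMPP
  http_access allow CONNECT Chat_ports

You probably want to combine that with your local-network acl rather than
allowing it for the whole world.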


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Chat Apps getting blocked

2008-04-21 Thread Odhiambo Washington
On Mon, Apr 21, 2008 at 4:13 PM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>  You have:
>   http_access deny CONNECT !SSL_ports
>
>  If you really want to allow the chat programs out, then you will need to
> add an acl for their domain/ports and allow CONNECT for them.

Hi Amos,

Thank you so much. This now works after I created an ACL for them.

PS: Does everyone on this list get some e-mail from ANTIGEN blah on
some exchange server whenever they send mail to the list or is it just
me?

For every post to the list, I get a response with the following data
in the body:


Microsoft Antigen for Exchange found a message matching a filter. The
message is currently Identified.
Message: "SUSPECT MAIL_ _squid_users_ Access Controls using MAC address"
Filter name: "KEYWORD= profanity: bastards;sexual discrimination: bastards"
Sent from: "Odhiambo Washington"
Folder: "SMTP Messages\Inbound"
Location: "tesco/First Administrative Group/SW2KE"


It's very annoying and I always wonder if squid-users is hosted on an
M$ Exchange platform :-)
Does anyone have a clue as to why I always get this?




-- 
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254733744121/+254722743223
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

"Oh My God! They killed init! You Bastards!"
 --from a /. post


Re: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Alex Rousskov
On Mon, 2008-04-21 at 18:48 +0800, Zhou, Bo(Bram) wrote:

> Recently I did some interesting performance testing on the Squid configured
> with Cache Digest Enabled. The testing result shows that the Squid use more
> than 20% CPU time than the Squid running without Cache Digest. 

Thank you for posting the results with a rather detailed description
(please consider also posting Polygraph workload next time you do this).

> Following are
> my detailed testing environment and configuration and result. Anyone can
> give me some light on the possible reason will be greatly appreciated.

Besides fetching peer digests and rebuilding the local digest, using
digests requires Squid to do the following for each "cachable" request:
- compute the digest key (should be cheap)
- lookup the digest hash tables (should be cheap for one peer)
- for CD hits, ask the peer for the object (expensive)
- update the digest (cheap)

As far as I understand, your test setup measured the sum of "cheap"
overheads but did not measure the expensive part. Perhaps more
importantly, the test did not allow for any cache digest hits so you are
comparing no-digest Squid with a useless-digest Squid. It would be nice
if you ran a test where all Polygraph Robots request URLs that can be in
both peer caches and where a hit has much lower response time (because
there is no artificial server-side delay). Depending on your hit ratio
and other factors, you may see significant overall improvement despite
the overheads (otherwise peering would be useless).


I am not a big fan of CPU utilization as the primary measurement because
it can bite you if the program does more "select" loops than needed when
not fully loaded. I would recommend focusing on response time while
using CPU utilization as an internal/secondary measurement. However,
let's assume that in your particular tests CPU utilization is a good
metric (it can be!).


20% CPU utilization increase is more than I would expect if there are no
peer queries. On the other hand, you also report 30% CPU increase when
two peers are busy (test1 versus test2). Thus, your test precision
itself can be within that 20% bracket. It would be interesting to see
test4 with one busy and one idle no-digest proxy.

If you can modify the code a little, it should be fairly easy to isolate
the core reason for the CPU utilization increase compared to a no-digest
Squid. Profiling may lead to similar results.

For example, I would disable all digest lookups (return "not found"
immediately) and local updates (do nothing) to make sure the CPU
utilization matches that of a no-digests tests. If CPU usage in that
test goes down about 20%, the next step would be to check whether it is
the lookup, the updates, or both. I would leave the lookup off but
reenable the updates and see what happens. Again, profiling may allow
you to do similar preliminary analysis without rerunning the test.

HTH,

Alex.





RE: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Zhou, Bo(Bram)
Alex,

Thanks for your quick response and good suggestions. You are definitely right
that I'm testing with a useless-digest Squid, but the high CPU utilization is
what I did not expect. I will do more testing and profiling with a modified
Squid as you suggested in the coming days, and collect more data beyond CPU
utilization. Thanks again.

Best Regards,
Bo Zhou


Re: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Adrian Chadd

Which OS?
If Linux, did you start looking at the CPU use using oprofile?
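
Something like the following is usually enough for a first pass (assuming
oprofile is set up; the binary path is an example):

  opcontrol --init
  opcontrol --no-vmlinux --start
  # ... run the load for a few minutes ...
  opcontrol --stop
  opreport -l /usr/local/squid/sbin/squid | head -30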


Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Proxy Auth sniffable?

2008-04-21 Thread Andreas Pettersson
Is the browser sending the username and password in cleartext or a simple
base64 encoding when the user authenticates with proxy authentication
against an LDAP directory?

-- 
Andreas




Re: [squid-users] how to check virus using squid

2008-04-21 Thread Christos Tsantilas

Henrik K wrote:

C-ICAP uses a more "native" protocol, but that's about it. It hasn't even
been updated to work with ClamAV 0.93 yet.


Yep, the clamav developers changed the clamav library API. If you find
such problems again, please report them to the c-icap mailing list.


A patch will be available soon in c-icap's sourceforge download page.
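
For anyone wanting to try it, wiring c-icap's clamav service into Squid-3
looks roughly like this (a sketch only - the service name and port are
c-icap's defaults, adjust as needed):

  icap_enable on
  icap_preview_enable on
  icap_service service_av respmod_precache 0 icap://127.0.0.1:1344/srv_clamav
  icap_class class_av service_av
  icap_access class_av allow all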

--
 Christos


Re: [squid-users] Proxy Auth sniffable?

2008-04-21 Thread Guido Serassio

Hi,

At 17:59 21/04/2008, Andreas Pettersson wrote:

Is the browser sending username and password in cleartext or a simple
base64 encoding when user authenticaties with proxy authentication
against an ldap directory?


Yes, as with any Basic authentication helper: the credentials are only
base64-encoded, which is trivially decodable.
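
You can see it on the wire: the browser sends something like

  Proxy-Authorization: Basic YWxpY2U6czNjcjN0

and anyone sniffing the connection can decode it trivially (made-up
credentials, obviously):

  $ echo YWxpY2U6czNjcjN0 | openssl base64 -d
  alice:s3cr3t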

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] Force cache reload for object from browser

2008-04-21 Thread Paul Bryson
I am wondering if there is any way (standard or not) to get a web 
browser to force a web cache to check for an updated version of an 
object. I'm using a Squid proxy that I do not have control over, and I'm 
trying to grab a file from a website, but the cache keeps handing me an 
older version of the file. (A .zip file, so it is saved by the browser 
to be opened in an external application.)


If this were a webpage, it's pretty standard with browsers to hit 
Ctrl-F5 to tell the proxy to grab a new version.  But with files that 
aren't loaded in the browser, is there any way to tell it to grab a 
newer version?



Atamido



Re: [squid-users] Force cache reload for object from browser

2008-04-21 Thread Henrik Nordstrom

mån 2008-04-21 klockan 14:03 -0500 skrev Paul Bryson:
> I am wondering if there is any way (standard or not) to get a web 
> browser to force a web cache to check for an updated version of an 
> object.

The reload button generally does that. Or if that fails Control+Reload
or Shift+Reload depending on browser and OS...

But the proxy admin MAY have tweaked their proxy to bend the rules and
ignore this...

> If this were a webpage, it's pretty standard with browsers to hit 
> Ctrl-F5 to tell the proxy to grab a new version.  But with files that 
> aren't loaded in the browser, is there any way to tell it to grab a 
> newer version?

Good question how to ask a browser to do a reload of a non-displayable
object...

Regards
Henrik



[squid-users] Re: Force cache reload for object from browser

2008-04-21 Thread Paul Bryson

Henrik Nordstrom wrote:

Good question how to ask a browser to do a reload of a non-displayable
object...


Heck, it doesn't really even need to be a browser (though that 
would be most universally useful).  I just need some way to tell the 
proxy to grab a new version of the file.



Atamido



Re: [squid-users] Problem with Restarted Squid Stable 2.6_19 Add

2008-04-21 Thread Nicole

On 21-Apr-08 My Secret NSA Wiretap Overheard Adrian Chadd Saying  :
> On Sun, Apr 20, 2008, Nicole wrote:
> 
>> > I took a look at this over the weekend (whilst looking at other stuff
>> > in the storage code) and I could -probably- make the AUFS swaplog
>> > parsing case much faster. I've just got other priorities at the moment
>> > (ie, lots more cleaning up before I start breaking things in creative
>> > ways.)
>> 
>>  Is this perhaps a recent change? I never noticed this until I upgraded
>> (from -16 I think). I tried downgrading once after a reboot, however I got
>> the same results when I tried to restart it. But other servers I have,
>> with older revs, don't have this problem.
> 
> It's been like this forever. How big are your swaplog files in each of your
> cache dirs? Do you periodically rotate the logfiles? (squid -k rotate)
> 
> 
 Hi 
 The swaplog files are about 156 megs, although I have some servers with
swaplogs that are 1.6 gigs and are fine, as those servers have never been
restarted.

 I have never run squid -k rotate. I have another server that just started
exhibiting the same sort of behaviour of slowing down. I tried lowering the
available disk size to force it to trim some files and did a squid -k rotate,
but it was still slow.

 It's getting to be kind of a drag having to constantly wipe out the cache
every few months when they get to a larger size. The disks are 146 gig and are
only 56% full. I am trying to keep lowering the allotted available cache size
to see if there is a sweet spot.

 How often should squid -k rotate be used? It seems like there are various
opinions on its usage and frequency.
 

  Nicole





--
 |\ __ /|   (`\
 | o_o  |__  ) )   
//  \\ 
  -  [EMAIL PROTECTED]  -  Powered by FreeBSD  -
--
 "The term "daemons" is a Judeo-Christian pejorative.
 Such processes will now be known as "spiritual guides"
  - Politically Correct UNIX Page





RE: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Zhou, Bo(Bram)
Adrian,

I'm using RHEL4 with kernel 2.6.9. I used top to collect CPU utilization;
I didn't use oprofile, but it is installed. I will use it for profiling in
a later testing session. Thanks for the reminder. 

Best Regards,
Bo Zhou





[squid-users] Rewrite http to https for owa.

2008-04-21 Thread Dwyer, Simon
Hey everyone,

I am starting to really get my squid server under control here :)

One last step to have it fully working is to rewrite addresses coming in on
http to https.  This is for OWA.  I have tried to use squirm and had some
success.  What I need to do is redirect http://mail.domainname.com/ to
https://mail.domainname.com/owa, for all reverse proxy requests.  Is there
an easier way to do this?  I have googled it without much success.

Cheers,

Simon


Re: [squid-users] Help Needed: Any suggestion on performance downgrade after enable Cache Digest?

2008-04-21 Thread Adrian Chadd
On Tue, Apr 22, 2008, Zhou, Bo(Bram) wrote:
> Adrian,
> 
> I'm using RHE4 with kernel 2.6.9. I used top to collect CPU utilization and
> didn't use oprofile but just installed. I will use it to do profiling in the
> later testing session. Thanks for your reminder. 

Make sure you also install the glibc library debug package.

It's also a good idea to graph/sample the top-level system stats - vmstat,
netstat -in, etc. I graph stuff on munin, but munin's granularity is ~5
minutes and not nearly fine enough to see transient events. This'll show
things like context switch increases, syscall count increases,
traffic/packet rate increases, etc, which may be an aid in tracking down
what is going on.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Problem with Restarted Squid Stable 2.6_19 Add

2008-04-21 Thread Adrian Chadd
On Mon, Apr 21, 2008, Nicole wrote:

>  Hi 
>  The swaplog files are about 156 megs, although I have some servers with
> swaplogs that are 1.6 gigs and are fine, as those servers have never been
> restarted.
> 
>  I have never run squid -k rotate. I have another server that just started
> exhibiting the same sort of behaviour of slowing down. I tried lowering the
> available disk size to force it to trim some files and did a squid -k rotate,
> but it was still slow.

Hm. Well, you should run squid -k rotate once a day.
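
A cron entry along these lines does it (the binary path is an example,
use wherever your squid actually lives):

  # in /etc/crontab
  0 4 * * * root /usr/local/squid/sbin/squid -k rotate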

>  It's getting to be kind of a drag having to constantly wipe out the cache
> every few months when they get to a larger size. The disks are 146 gig and
> are only 56% full. I am trying to keep lowering the allotted available cache
> size to see if there is a sweet spot.
> 
>  How often should squid -k rotate be used? It seems like there are various
> opinions on its usage and frequency.

Are you using AUFS on a recent FreeBSD (FreeBSD > 5.x) ?

I've built a 4 x 18 gig test cache here (not enough RAM atm to run more of the
disks :/) and the rebuild-from-swaplog is quite a bit faster than rebuild-from-
cache. Check cache.log and see if it's rebuilding from swaplog, or from cache
(it'll say DIRTY.)

I'd start by looking at iostat to see if Squid is doing a decent amount of IO,
and vmstat / top to see if Squid has hit 100% CPU. The rebuilding logic happens
synchronously - it doesn't use the async io routines to do background 
processing.
I guess I should make it do that but that'll have to wait.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Rewrite http to https for owa.

2008-04-21 Thread Amos Jeffries

Dwyer, Simon wrote:

Hey everyone,

I am starting to really get my squid server under control here :)

One last step to have it fully working is to rewrite addresses coming in on
http to https.  This is for OWA.  I have tried to use squirm and had some
success.  What I need to do is redirect http://mail.domainname.com/ to
https://mail.domainname.com/owa, for all reverse proxy requests.  Is there
an easier way to do this?  I have googled it without much success.

Cheers,

Simon


Have you tried this:
http://wiki.squid-cache.org/ConfigExamples/SquidAndOutlookWebAccess

Maybe with a basic http_port listener instead of the https_port.
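
If it is only the http -> https bounce you are missing, a sketch like the
following may also do it, using deny_info's URL-redirect behaviour (the
hostnames are from your example, the acl name is made up):

  http_port 80 accel defaultsite=mail.domainname.com
  acl owa_http myport 80
  deny_info https://mail.domainname.com/owa owa_http
  http_access deny owa_http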

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Re: Force cache reload for object from browser

2008-04-21 Thread Amos Jeffries

Paul Bryson wrote:

Henrik Nordstrom wrote:

Good question how to ask a browser to do a reload of a non-displayable
object...


Heck, it doesn't really even need to even be a browser (though that 
would be most universally useful).  I just need some way to tell the 
proxy to grab a new version of the file.




If you have access to an app that lets you set custom headers (curl, 
wget, squidclient, etc) you could try sending a request for the object 
with the header:


  Cache-Control: max-age=0, must-revalidate, proxy-revalidate

and hope that at least one of those mechanisms is available.
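
e.g. with curl through the proxy (the proxy name and port are examples):

  curl -x myproxy.example.com:3128 \
       -H 'Cache-Control: max-age=0, must-revalidate, proxy-revalidate' \
       -O http://www.example.com/file.zip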

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Does anyone know how to make https work?

2008-04-21 Thread Amos Jeffries
Brian Lu wrote:
> Hi All
> I have a problem: when I use https to access web pages, my IE always
> shows me:
> 1. If a cache_peer is set up:
> ERROR
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: 
> https://www.chb.com.tw/wcm/web/home/index.html
> The following error was encountered:
> Unsupported Request Method and Protocol
> Squid does not support all request methods for all access protocols. For 
> example, you can not POST a Gopher request.
> 
> Generated Mon, 21 Apr 2008 05:22:30 GMT by proxy.seed.net.tw 
> (squid/2.5.STABLE11)
> 
> 2. If no cache_peer:
> ERROR
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: 
> https://www.chb.com.tw/wcm/web/home/index.html
> The following error was encountered:
> Connection to 210.65.204.245 Failed
> The system returned:
>(71) Protocol error
> The remote host or network may be down. Please try the request again.
> Your cache administrator is .
> 
> Generated Mon, 21 Apr 2008 05:18:30 GMT by 192.168.1.254 
> (squid/3.0.STABLE2)
> 
> My squid version:
> [EMAIL PROTECTED] ]# squid -v
> Squid Cache: Version 3.0.STABLE2
> configure options:  '--enable-ssl' '--enable-linux-netfilter' 
> '--enable-referer-log'
> 
> My squid.conf:

> http_port 3128 transparent
> https_port 3129 cert=/usr/local/squid/etc/cert.pem 
> key=/usr/local/squid/etc/key.pem transparent


HTTPS cannot be intercepted transparently in 3.0 or any 2.x.

You need to have 3.1 with sslBump enabled for that.
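
In 3.1 that is expected to look something like this (a sketch only - 3.1
is still in development, so check its release notes for the final syntax):

  http_port 3128 ssl-bump cert=/usr/local/squid/etc/cert.pem key=/usr/local/squid/etc/key.pem
  ssl_bump allow all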


> 
> Does anyone know how to make https work? thank you very much~
> 
> Best regards,
> Brian Lu

(sorry if my txt is garbled, thunderbird seems not to like unicode editing)

Amos
-- 
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: [squid-users] Chat Apps getting blocked

2008-04-21 Thread Amos Jeffries

g f wrote:

I have a question about your reply:
http_access deny CONNECT !SSL_ports
Shouldn't this deny access to all but SSL_ports 443 and 563?

but wouldn't this:
 acl Safe_ports port 1025-65535  # unregistered ports
 http_access deny !Safe_ports

allow access on port 5222 (normally default xmpp port).

I am curious if I understand the acls properly.


They are all run top-to-bottom with first-match-wins.

So the ...
  http_access deny !Safe_ports

... does not stop port 5222 access, merely lets it continue down to a 
later ACL check. Which in this case is ...


  http_access deny CONNECT !SSL_Ports

... which matches and denies it (CONNECT is being done and 5222 is not 
in SSL_Ports).


Amos


