[squid-users] squid-3.1.0.9 - error directory not created automatically

2009-07-01 Thread Zeller, Jan
dear list,

I compiled squid-3.1.0.9 like this:

$ squid -v
Squid Cache: Version 3.1.0.9
configure options:  '--prefix=/opt/squid-3.1.0.9' '--enable-icap-client' 
'--enable-ssl' '--enable-linux-netfilter' '--enable-http-violations' 
'--with-filedescriptors=32768' '--with-pthreads' '--disable-ipv6' 
--with-squid=/usr/local/src/squid-3.1.0.9 --enable-ltdl-convenience

Unfortunately no 'error' directory is created!? Why? squid-3.1.0.7 
created this directory automatically.
Should I explicitly download the language pack from 
http://www.squid-cache.org/Versions/langpack/ ?

kind regards,

Kind regards
---
Jan Zeller
Informatikdienste 
Universität Bern



Re: [squid-users] Antwort: Re: [squid-users] memory usage for squid-3.0.STABLE15

2009-07-01 Thread Matus UHLAR - fantomas
On 30.06.09 13:13, martin.pichlma...@continental-corporation.com wrote:
 I checked -- cached objects are not re-checked, at least not within two or
 three hours.
 
 But the memory usage is still higher without icap while the cache is still
 filling -- but this may be due to the fact that I configured squid to cache
 objects only up to 1 MB, while icap scans larger objects, too.
 
 Additionally, squid does not know the icap scan limit, therefore every file
 will be sent to the ICAP server. So the higher memory usage is probably
 due to the need to buffer large objects, too, at least until ICAP has
 scanned them. Your thought about the two memory buffers may be another
 reason.
 
 OK, thank you, now I understand more clearly the extensive memory usage
 with ICAP. I will have to rethink whether ICAP is really the best way for
 what I want.

Well, I cache files bigger than 1 MB; my maximum_object_size is 32 MB now.
With the LFUDA policy I found this gives a better byte hit rate...
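
For reference, that tuning is just these two squid.conf lines (a sketch;
the size is site-specific):

maximum_object_size 32 MB
cache_replacement_policy heap LFUDA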

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
A day without sunshine is like, night.


Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread nyoman karna

I use Squid 2.7.STABLE4 on FreeBSD7.1
with these settings in squid.conf 
(explained by Amos about 3 months ago):

maximum_object_size 16384 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 512 KB
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir aufs /webcache 163840 256 256

and it works fine with hundreds of users
(HIT ratio is above 75%)

and yes, I also changed my dedicated hard drive 
(for the webcache) from SATA to IDE, since my problem was
that Squid was rejecting all requests when it was so busy
working on the cached objects.

hope it helps


Nyoman Bogi Aditya Karna
   IM Telkom
 http://imtelkom.ac.id




--- On Tue, 6/30/09, Chris Robertson crobert...@gci.net wrote:

 From: Chris Robertson crobert...@gci.net
 Subject: Re: [squid-users] squid becomes very slow during peak hours
 To: squid-users@squid-cache.org
 Date: Tuesday, June 30, 2009, 5:25 PM
 goody goody wrote:
  Hi there,
 
  I am running squid 2.5 on freebsd 7,
 
 As Adrian said, upgrade.  2.6 (and 2.7) support kqueue
 under FreeBSD.
 
   and my squid box respond very slow during peak hours. my squid machine
  have twin dual core processors, 4 ram and following hdds.
 
   Filesystem     Size    Used   Avail Capacity  Mounted on
   /dev/da0s1a    9.7G    241M    8.7G     3%    /
   devfs          1.0K    1.0K      0B   100%    /dev
   /dev/da0s1f     73G     35G     32G    52%    /cache1
   /dev/da0s1g     73G    2.0G     65G     3%    /cache2
   /dev/da0s1e     39G    2.5G     33G     7%    /usr
   /dev/da0s1d     58G    6.4G     47G    12%    /var
 
   below are the status and settings i have done. i need further guidance
  to improve the box.
 
   last pid: 50046;  load averages:  1.02,  1.07,  1.02   up 7+20:35:29  15:21:42
   26 processes:  2 running, 24 sleeping
   CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
   Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
   Swap: 4096M Total, 20K Used, 4096M Free
 
     PID USERNAME THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
   49819 sbt        1 105    0   360M   351M CPU3   3  92:43 98.14% squid
     487 root       1  96    0  4372K  2052K select 0  57:00  3.47% natd
     646 root       1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
 
  SNIP
   pxy# iostat
         tty            da0             pass0            cpu
    tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
      0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95
 
   pxy# vmstat
    procs      memory     page                   disks    faults      cpu
    r b w     avm    fre   flt  re  pi  po   fr  sr da0 pa0   in   sy   cs us sy id
    1 3 0  458044 103268    12   0   0   0   30   5   0   0  273 1721 2553  4  1 95
  
 Those statistics show wildly different utilization.  The first (top, I
 assume) shows 75% idle (or a whole CPU in use).  The next two show 95%
 idle (in effect, one CPU 20% used).  How close (in time) were the
 statistics gathered?
 
 
  some lines from squid.conf
  cache_mem 256 MB
  cache_replacement_policy heap LFUDA
  memory_replacement_policy heap GDSF
 
  cache_swap_low 80
  cache_swap_high 90
 
  cache_dir diskd /cache2 6 16 256 Q1=72 Q2=64
  cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64
 
  cache_log /var/log/squid25/cache.log
  cache_access_log /var/log/squid25/access.log
  cache_store_log none
 
  half_closed_clients off
  maximum_object_size 1024 KB 
    
  if anyother info required, i shall provide.
    
 
 The types (and number) of ACLs in use would be of interest
 as well.
 
  Regards,
  .Goody.
    
 
 Chris
 
 





[squid-users] error compiling squid 3.0 stable 16

2009-07-01 Thread Gontzal
Hello everybody!

I have a problem compiling squid 3.0 stable 16 on an openSUSE 10.3 box.
These are my configure options:

./configure --prefix=/usr --sysconfdir=/etc/squid --bindir=/usr/sbin
--sbindir=/usr/sbin --localstatedir=/var --libexecdir=/usr/sbin
--datadir=/usr/share/squid --libdir=/usr/lib --with-dl
--sharedstatedir=/var/squid --enable-storeio=aufs,diskd,null,ufs
--enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads
--enable-removal-policies=heap,lru --enable-icmp --enable-delay-pools
--enable-http-violations --enable-esi --enable-icap-client
--enable-useragent-log --enable-referer-log --enable-kill-parent-hack
--enable-snmp --enable-arp-acl --enable-htcp --enable-ssl
--enable-forw-via-db --enable-cache-digests --enable-poll
--enable-linux-netfilter --with-large-files --enable-underscores
--enable-auth=basic,digest,ntlm,negotiate
--enable-basic-auth-helpers=DB,LDAP,MSNT,NCSA,POP3,SASL,SMB,YP,getpwnam,multi-domain-NTLM,squid_radius_auth
--enable-ntlm-auth-helpers=SMB,no_check,fakeauth
--enable-negotiate-auth-helpers=squid_kerb_auth
--enable-digest-auth-helpers=eDirectory,ldap,password
--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group
--enable-ntlm-fail-open --enable-stacktraces
--enable-x-accelerator-vary --with-default-user=squid

No problem when configuring. And this is the error when doing make:

Making all in squid_kerb_auth
make[3]: Entering directory
`/tmp/squid-3.0.STABLE16/helpers/negotiate_auth/squid_kerb_auth'
gcc -DHAVE_CONFIG_H -I. -I../../../include   -I./spnegohelp -I.   -m32
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -g -O2 -MT
squid_kerb_auth.o -MD -MP -MF .deps/squid_kerb_auth.Tpo -c -o
squid_kerb_auth.o squid_kerb_auth.c
squid_kerb_auth.c:76:18: error: krb5.h: No such file or directory
squid_kerb_auth.c:77: error: expected ')' before 'major_status'
squid_kerb_auth.c:133: error: expected ')' before 'major_status'
squid_kerb_auth.c: In function 'main':
squid_kerb_auth.c:197: error: 'OM_uint32' undeclared (first use in this function)
squid_kerb_auth.c:197: error: (Each undeclared identifier is reported only once
squid_kerb_auth.c:197: error: for each function it appears in.)
squid_kerb_auth.c:197: error: expected ';' before 'ret_flags'
squid_kerb_auth.c:201: error: expected ';' before 'major_status'
squid_kerb_auth.c:202: error: 'gss_ctx_id_t' undeclared (first use in this function)
squid_kerb_auth.c:202: error: expected ';' before 'gss_context'
squid_kerb_auth.c:203: error: 'gss_name_t' undeclared (first use in this function)
squid_kerb_auth.c:203: error: expected ';' before 'client_name'
squid_kerb_auth.c:204: error: expected ';' before 'server_name'
squid_kerb_auth.c:205: error: 'gss_cred_id_t' undeclared (first use in this function)
squid_kerb_auth.c:205: error: expected ';' before 'server_creds'
squid_kerb_auth.c:206: error: expected ';' before 'delegated_cred'
squid_kerb_auth.c:207: error: 'gss_buffer_desc' undeclared (first use in this function)
squid_kerb_auth.c:207: error: expected ';' before 'service'
squid_kerb_auth.c:208: error: expected ';' before 'input_token'
squid_kerb_auth.c:209: error: expected ';' before 'output_token'
squid_kerb_auth.c:245: error: 'service' undeclared (first use in this function)
squid_kerb_auth.c:286: warning: the address of 'buf' will always evaluate as 'true'
squid_kerb_auth.c:303: warning: implicit declaration of function 'gss_release_buffer'
squid_kerb_auth.c:303: error: 'minor_status' undeclared (first use in this function)
squid_kerb_auth.c:303: error: 'input_token' undeclared (first use in this function)
squid_kerb_auth.c:304: error: 'output_token' undeclared (first use in this function)
squid_kerb_auth.c:306: warning: implicit declaration of function 'gss_release_cred'
squid_kerb_auth.c:306: error: 'server_creds' undeclared (first use in this function)
squid_kerb_auth.c:307: error: 'delegated_cred' undeclared (first use in this function)
squid_kerb_auth.c:308: warning: implicit declaration of function 'gss_release_name'
squid_kerb_auth.c:308: error: 'server_name' undeclared (first use in this function)
squid_kerb_auth.c:309: error: 'client_name' undeclared (first use in this function)
squid_kerb_auth.c:310: warning: implicit declaration of function 'gss_delete_sec_context'
squid_kerb_auth.c:310: error: 'gss_context' undeclared (first use in this function)
squid_kerb_auth.c:313: error: 'spnego_flag' undeclared (first use in this function)
squid_kerb_auth.c:341: error: 'GSS_C_NO_CONTEXT' undeclared (first use in this function)
squid_kerb_auth.c:400: error: 'major_status' undeclared (first use in this function)
squid_kerb_auth.c:400: warning: implicit declaration of function 'gss_import_name'
squid_kerb_auth.c:401: error: 'gss_OID' undeclared (first use in this function)
squid_kerb_auth.c:401: error: expected ')' before 'GSS_C_NULL_OID'
squid_kerb_auth.c:404: error: 'GSS_C_NO_NAME' undeclared (first use in this function)
squid_kerb_auth.c:405: error: 'GSS_S_COMPLETE' undeclared (first use in this function)

AW: [squid-users] error compiling squid 3.0 stable 16

2009-07-01 Thread Zeller, Jan
Hi,

it seems that you have no krb5.h (the Kerberos development header):

squid_kerb_auth.c:76:18: error: krb5.h: No such file or directory
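
On openSUSE the missing header usually comes with the Kerberos
development package, so something like this should fix it (the exact
package name is an assumption; check your repositories):

zypper install krb5-devel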

kind regards,

Jan

-Original Message-
From: Gontzal [mailto:gontz...@gmail.com] 
Sent: Wednesday, 1 July 2009 11:42
To: Squid-users
Subject: [squid-users] error compiling squid 3.0 stable 16


[squid-users] store.log suddenly filling very quickly with question marks

2009-07-01 Thread Andrew Beverley

Hi,

In the last few days, my store.log has suddenly started filling up  
with entries such as the following:


1246200847.769 RELEASE -1  74D41A19D1E64DB54978AD277BA12FC7 ? ? ? ? ?/? ?/? ? ?


Despite log rotation, the log file has hit 2GB and stopped Squid from  
working as the file is too big to write to.


I have disabled the store.log file, but I am concerned that something  
is not right. Could there be something nasty on the network?
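
For anyone else hitting this, switching the store.log off is a single
squid.conf line:

cache_store_log none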


I found a similar post here:

http://marc.info/?l=squid-users&m=119006456530768&w=2

But the answer in that thread was to upgrade. I am using 3.0.STABLE8  
on Debian 5.


So in summary, I have found a workaround, but I am concerned that  
something is not right. Should I be concerned?


Many thanks in advance,

Andy





Re: Fw: [squid-users] NTLM Auth and Java applets (Any update)

2009-07-01 Thread Gontzal
Hi,

I've recompiled squid, now 3.0 stable 16, on a non-production opensuse
10.3 server with the --enable-http-violations option.
I've added the following lines to my squid.conf file:

acl Java browser Java/1.4 Java/1.5 Java/1.6

header_access Proxy-Authenticate deny Java
header_replace Proxy-Authenticate Basic realm=

The header tags are before the http_access tags; I don't know if that is
correct. I've also disabled the option http_access allow Java.

Squid runs correctly, but when I check for java it doesn't work; it
doesn't ask for basic auth and doesn't show the java applet page.

On the access log it shows lines like this one:

(01/Jul 12:46:01) (TCP_DENIED/407/NONE) (172.28.3.186=172.28.129.250)
(tp.seg-social.es:443) text/html-2226bytes 1ms

I've changed the identity of my browser from firefox to java, and it
browses using ntlm auth instead of asking for user/passwd.

Where can be the problem?

Thanks again!

2009/6/30 Amos Jeffries squ...@treenet.co.nz:


 I agree this does look like a good clean solution. I'll look at
 implementing a small on/off toggle to do only this change for safer Java
 bypass. May not be very soon though. What version of Squid are you using?

 Meanwhile yes, you do have to add the option to the ./configure options and
 re-compile and re-install Squid.
 The install process, if done right, should not alter the existing squid.conf and
 should be a simple drop-in to the existing install. But a backup is worth doing
 just in case.
 If currently using a packaged Squid, you may want to contact the package
 maintainer for any help on the configure and install steps.

 Amos

 On Mon, 29 Jun 2009 10:40:06 +0200, Gontzal gontz...@gmail.com wrote:
 Hi Kevin,


 Thanks for your post, I think it is a very good solution to the Java
 security
 hole.

 I've seen that to use header_access and header_replace you need to
 compile with --enable-http-violations. My question is: if I
 compiled squid without this option, is there any way to add this
 feature, or do I have to compile the entire squid again? In that case,
 should I save my configuration files?

 Where should I put these lines, after acls?

 Thanks again

 Gontzal

 2009/6/27 Kevin Blackwell akblack...@gmail.com:
 This what your looking for?

 acl javaNtlmFix browser -i java
 acl javaConnect method CONNECT
 header_access Proxy-Authenticate deny javaNtlmFix javaConnect
 header_replace Proxy-Authenticate Basic realm=Internet

 now only https/ssl access from java will have basic auth and so a
 password dialog.
 normal http access will work with ntlm challenge response.

 thanxs again

 markus

-Original Message-
From: Rietzler, Markus (Firma Rietzler Software / RZF)
Sent: Tuesday, 16 October 2007 18:17
To: 'Chris Robertson'; squid-users@squid-cache.org
Subject: AW: [squid-users] force basic NTLM-auth for certain
clients/urls

thanxs for that hint - it worked as a fix

i have added this to my squid.conf

acl javaNtlmFix browser -i java
header_access Proxy-Authenticate deny javaNtlmFix
header_replace Proxy-Authenticate Basic realm=Internet Access

now any java-client (java web start, java or applets in
browser) will only see the basic auth scheme.
a username/password dialog pops up and i have to enter my credentials.

any other client (firefox, ie) still sees both the NTLM and Basic
schemes and uses NTLM challenge response to authenticate...

the little drawback is that there is that little nasty dialog,
but the connection via the proxy is working...

thanxs

markus


 On Sat, May 9, 2009 at 12:13 AM, Nitin
 Bhadaurianitin.bhadau...@tetrain.com wrote:
 Dear All,

 Please reply if we have some solution for the problem. I am stuck with
 the problem: my server is live and I can't afford to allow the java sites
 to unauthorized users in the network.

 Regards,
 Nitin B.


 Nitin Bhadauria wrote:

 Dear All,


 I have the same problem ..

 Every time a browser proxying through squid tries to load a secure java
 applet, it comes up with a red x where the java applet should be.


 So I have bypassed those sites for authentication, but the problem is that
 users who don't have permission to access the internet are also able to
 access those sites.

 Please update if you have found any other solution for the problem.

 Thanks in advance for any reply.

 Regards,
 Nitin Bhadauria










Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread goody goody

Thanks for replies,

1. i have tried squid 3.0 stable 14 for a few weeks, but the problems were there 
and the performance issues were also severe. as we previously had 2.5 stable 10 
running, that's why i reverted to it temporarily. further, i still have squid 3.0/14 in 
place, as i installed 2.5 in a separate directory, and i can run squid 3.0/14 
anytime. i will also welcome it if you tell me the most stable version of squid. 

2. secondly, we are using RAID 5 and have a very powerful machine at present as 
compared to the previous one, and the previous one was working well with the same 
amount of traffic and a less powerful system.

3. thirdly, i have a gigabit network card, but yes, i have a 100 mb ethernet channel; 
as described in point 2, though, the same link was working superbly in the previous setup.

4. i could not get chris robertson's question regarding processors; i have two 
dual core xeon processors (3.2 ghz), and i captured the stats at peak hours when 
performance was degraded.


So what should i do???

Regards,

--- On Wed, 7/1/09, Chris Robertson crobert...@gci.net wrote:

 From: Chris Robertson crobert...@gci.net
 Subject: Re: [squid-users] squid becomes very slow during peak hours
 To: squid-users@squid-cache.org
 Date: Wednesday, July 1, 2009, 2:25 AM




[squid-users] [squid 2.7 stable 3][ubuntu 9.04][shared usage with two proxies]

2009-07-01 Thread Volker Jahns
Hi all.

I don't know which subject line is the best one, so sorry if it is a bad
one.

What is the problem?

I want to work with two (or more) squid proxies, called P1 and P2 (perhaps two
or three P2-type servers). But first it should work with just one P2
server.

P1: The task for P1 is authorization in our HQ for all users. Even for the
users in the branches.

P2: The task for P2 is caching only, at the internet interface somewhere in
the network; after P1 accepts the authorization in the HQ (also for the users
in the branches), P2 moves the response directly to the requesting client.

Every branch is at least one network, plus the HQ network. To reduce the
traffic in the whole network I want P2 to send the requested pages from
the internet or its own cache to the branch, and not via P1 in the HQ. The
background: no senseless traffic at the HQ gateway.

A short data flow the usual way:

Branch client x -> authorization HQ (P1) -> forward request -> internet
gateway (P2) -> get request internet or cache (P2) -> deliver page -> P1
-> deliver page client x Branch

A short data flow example how it should work:

Branch client x -> authorization HQ (P1) -> forward request -> internet
gateway (P2) -> get request internet or cache (P2) -> deliver page ->
client x Branch

The difference seems to be small but it is important.

First question, in general: does it work?

Second question if it works: how do I configure this?

Until now I have P1 configured as a sibling with a second cache (P2) as
parent, acting as origin server with no-via and no http11. The authorization
on P1 works and P2 tries to get the requested page. But in fact, on the way
from P1 to P2, the URI header information is lost (simply a / was left), and
in the end it is not working yet.

Hope someone could help.
Volker



RE: [squid-users] Strange problem with sibling squids in accelerator mode

2009-07-01 Thread Lu, Roy
It seems that only if the http request is sent from the same server where
the squid instance has the object will its sibling retrieve it
from this server. If the request is sent from a different box, no ICP
communication goes on. Why?

-Original Message-
From: Lu, Roy [mailto:r...@facorelogic.com] 
Sent: Tuesday, June 30, 2009 5:26 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Strange problem with sibling squids in
accelerator mode

I am using version 3.0 stable 16.

-Original Message-
From: Lu, Roy [mailto:r...@facorelogic.com] 
Sent: Tuesday, June 30, 2009 5:24 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Strange problem with sibling squids in
accelerator mode

Hi,

I encountered a strange problem in using sibling squids as accelerators.
I have two accelerator squids, A and B (on two different boxes). They
are set up as sibling cache peers which both point to the same parent
cache_peer origin content server. I used the following commands to run
my test:

1. Load an object into A:

%squidclient -h host.name.of.A URL

2. Purge the object from B:

%squidclient -h host.name.of.B -m PURGE URL

3. Double check to make sure A has the object and B does not:

%squidclient -h host.name.of.A -m HEAD -H "Cache-Control:
only-if-cached\n" URL
Resulted in TCP_MEM_HIT

%squidclient -h host.name.of.B -m HEAD -H "Cache-Control:
only-if-cached\n" URL
Resulted in TCP_MISS

4. Request the object from B:

%squidclient -h host.name.of.B URL

Now the strange problem comes in. If I run the last step on the box A,
the ICP communication occurs, and in A's log I see UDP_HIT and
TCP_MEM_HIT, and in B's log I see TCP_MISS and SIBLING_HIT. However, if
I run the last step in B, then there is no ICP communication, squid B
simply goes to the parent origin server to get the object (in B's log
FIRST_UP_PARENT and nothing in A's log). When I run the same test with
squidclient on a third machine, the result is negative too. So it seems
that only if I run the squidclient utility on the same box where the
cached object is, then its sibling cache will retrieve the object from
this box.

The configuration for Squid A is:

#===

# ACL changes
#===

# acl for purge method
acl acl_purge method purge

# acl for origin app server
acl acl_gpl_app_servers dstdomain vmprodcagpcna04.firstamdata.net

# acl for cache peer squid server
acl acl_gpl_cache_sibling src host.name.of.A

#===

# http_access changes
#===

# allow purge method from localhost or sibling
http_access allow acl_purge localhost
http_access allow acl_purge acl_gpl_cache_sibling
http_access deny acl_purge

# allow http access to app servers and from cache sibling
http_access allow acl_gpl_app_servers
http_access allow acl_gpl_cache_sibling

#===

# icp_access changes
#===

# allow icp queries from cache sibling
icp_access allow acl_gpl_cache_sibling

#===

# cache_peer changes
#===

cache_peer vmprodcagpcna04.firstamdata.net parent 7533 0 no-query
originserver name=cp_gpl_app_servers
cache_peer host.name.of.A sibling 3128 3130 name=cp_gpl_cache_sibling
proxy-only

#===

# cache_peer_access changes
#===

# Allow peer connection to the origin app server and sibling cache peer
cache_peer_access cp_gpl_app_servers allow acl_gpl_app_servers
cache_peer_access cp_gpl_cache_sibling allow acl_gpl_cache_sibling


Configuration for B is almost identical except the host.name.of.A in the
acl and cache_peer tags is switched with B's. 

Can someone point out what might be the problem here?

Thanks.
Roy


Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread Quin Guin

Hi,

  I am running both the 2.7-STABLE6 and 3.1.0.8 versions on
more than a few servers. On avg I am pushing about 230+ TPS, and at peak
usage I don't see delays from SQUID unless you come across a content
server that is having issues or isn't cache friendly; DNS issues will
cause problems... but I can't stress enough how fast you should ditch
RAID altogether and go with a JBOD - you will see a big improvement
no matter what OS you are running.


Quin


- Original Message 
From: goody goody think...@yahoo.com
To: squid-users@squid-cache.org
Cc: Chris Robertson crobert...@gci.net; balique8...@yahoo.com; 
hen...@henriknordstrom.net; Amos jafferies Squid GURU squ...@treenet.co.nz
Sent: Wednesday, July 1, 2009 7:07:59 AM
Subject: Re: [squid-users] squid becomes very slow during peak hours





Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread Henrik Nordstrom
Wed 2009-07-01 at 05:07 -0700, goody goody wrote:

 2. secondly we are using RAID 5 and have very powerfull machine at
 present as compared to previous one, and previous was working good
 with the same amount of traffic and less powerfull system.

Please note that RAID5 can degrade performance quite noticeably for
workloads like Squid with aufs/diskd. This is due to the high amount of
random writes. But it depends a lot on the RAID controller, and
obviously on the number of drives installed in the RAID set and the amount of
(battery-backed) memory used by the RAID controller for write
optimizations.

 4. i could not get chris robertson question regarding processors, i
 have two dual core xeon processors(3.2 ghz) and i captured stats at
 peak hours when performance was degraded.

Squid only uses little more than one core at most.

 So what should i do???

Try Squid-2.7, and aufs instead of diskd.
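
For example, a minimal aufs version of the cache_dir lines quoted earlier
might look like this (the sizes here are placeholders; keep your own
values):

cache_dir aufs /cache1 60000 16 256
cache_dir aufs /cache2 60000 16 256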

And don't forget to check your cache.log to see if there is any relevant
warnings/errors.

Regards
Henrik



[squid-users] squid becomes very slow during peak hours

2009-07-01 Thread c0re dumped
Are you using amd64?

I had a lot of trouble with squid3 + FreeBSD7 amd64.

I was using squid 2.7 + dg + FreeBSD i386 and the server had amazing
performance.

Then I switched to squid3 and fbsd7 amd64 and things became way worse.

I'm currently planning to downgrade to the previous solution as soon as I can.





-- 

To err is human, to blame it on somebody else shows management potential.


Re: [squid-users] [squid 2.7 stable 3][ubuntu 9.04][shared usage with two proxies]

2009-07-01 Thread Chris Robertson

Volker Jahns wrote:

Hi all.

I don't know which subject line is the best one, so sorry if it is a bad
one.

What is the problem?

I want to work with two (or more) squid proxies, called P1 and P2 (perhaps two
or three P2-type servers). But first it should work with just one P2
server.

P1: The task for P1 is authorization in our HQ for all users. Even for the
users in the branches.

P2: The task for P2 is caching only, at the internet interface somewhere in
the network; after P1 accepts the authorization in the HQ (also for the users
in the branches), P2 moves the response directly to the requesting client.

Every branch is at least one network, plus the HQ network. To reduce the
traffic in the whole network I want P2 to send the requested pages from
the internet or its own cache to the branch, and not via P1 in the HQ. The
background: no senseless traffic at the HQ gateway.

A short data flow the usual way:

Branch client x -> authorization HQ (P1) -> forward request -> internet
gateway (P2) -> get request internet or cache (P2) -> deliver page -> P1
-> deliver page client x Branch

A short data flow example how it should work:

Branch client x -> authorization HQ (P1) -> forward request -> internet
gateway (P2) -> get request internet or cache (P2) -> deliver page ->
client x Branch

The difference seems to be small but it is important.

First question, in general: does it work?
  


So, for example, your P1 proxy has an IP address of 10.0.0.5 and your P2 
proxy has an IP address of 10.10.10.5.  Your client (10.20.20.200) makes 
a request for a web object from 10.0.0.5 and (since it has already made 
a request and knows that authentication is required) sends its 
authentication credentials.  10.0.0.5 sends the request to 10.10.10.5.  
There is no way for 10.10.10.5 to send a reply to 10.20.20.200, as there 
is no TCP connection to send the reply on.



Second question if it works: how do I configure this?
  


Your best bet would be to just send your clients to the P2 server, and 
let it pull the authentication from the source currently being used by 
the P1 server.
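
As an illustration only (the helper path, auth scheme and password file
here are assumptions, not your actual setup), moving basic auth onto P2
needs little more than:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
acl authed_users proxy_auth REQUIRED
http_access allow authed_users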



Until now I have P1 configured as a sibling with a second cache (P2) as
parent, acting as origin server with no-via and no http11.


Wait, what?  You have a forward proxy going to a reverse proxy, which is 
accelerating the entire internet, while stripping the Via header?



The authorization on P1 works and P2 tries to get the requested page. But in
fact, on the way from P1 to P2, the URI header information is lost (simply a
/ was left), and in the end it is not working yet.
  


I imagine it's not...


Hope someone could help.
Volker
  


Chris



Re: [squid-users] Strange problem with sibling squids in accelerator mode

2009-07-01 Thread Chris Robertson

Lu, Roy wrote:

Hi,

I encountered a strange problem in using sibling squids as accelerators.
I have two accelerator squids, A and B (on two different boxes). They
are set up as sibling cache peers which both point to the same parent
cache_peer origin content server. I used the following commands to run
my test:

1. Load an object into A:

%squidclient -h host.name.of.A URL

2. Purge the object from B:

%squidclient -h host.name.of.B -m PURGE URL

3. Double check to make sure A has the object and B does not:

%squidclient -h host.name.of.A -m HEAD -H "Cache-Control:
only-if-cached\n" URL
Resulted in TCP_MEM_HIT

%squidclient -h host.name.of.B -m HEAD -H "Cache-Control:
only-if-cached\n" URL
Resulted in TCP_MISS

4. Request the object from B:

%squidclient -h host.name.of.B URL

Now the strange problem comes in. If I run the last step on the box A,
the ICP communication occurs, and in A's log I see UDP_HIT and
TCP_MEM_HIT, and in B's log I see TCP_MISS and SIBLING_HIT. However, if
I run the last step in B, then there is no ICP communication, squid B
simply goes to the parent origin server to get the object (in B's log
FIRST_UP_PARENT and nothing in A's log). When I run the same test with
squidclient on a third machine, the result is negative too. So it seems
that only if I run the squidclient utility on the same box where the
cached object is, then its sibling cache will retrieve the object from
this box.
  


I think this is due to your cache_peer and/or cache_peer_access lines...


The configuration for Squid A is:

#===

# ACL changes
#===

# acl for purge method
acl acl_purge method purge

# acl for origin app server
acl acl_gpl_app_servers dstdomain vmprodcagpcna04.firstamdata.net

# acl for cache peer squid server
acl acl_gpl_cache_sibling src host.name.of.A

#===

# http_access changes
#===

# allow purge method from localhost or sibling
http_access allow acl_purge localhost
http_access allow acl_purge acl_gpl_cache_sibling
http_access deny acl_purge

# allow http access to app servers and from cache sibling
http_access allow acl_gpl_app_servers
http_access allow acl_gpl_cache_sibling

#===

# icp_access changes
#===

# allow icp queries from cache sibling
icp_access allow acl_gpl_cache_sibling

#===

# cache_peer changes
#===

cache_peer vmprodcagpcna04.firstamdata.net parent 7533 0 no-query
originserver name=cp_gpl_app_servers
cache_peer host.name.of.A sibling 3128 3130 name=cp_gpl_cache_sibling
proxy-only
  


Did you REALLY set Squid A up as a sibling to ITSELF?


#===

# cache_peer_access changes
#===

# Allow peer connection to the origin app server and sibling cache peer
cache_peer_access cp_gpl_app_servers allow acl_gpl_app_servers
cache_peer_access cp_gpl_cache_sibling allow acl_gpl_cache_sibling
  


Assuming the cache_peer line above was a typo, and you have set Squid 
B as a peer, here you specifically state...


allow access to the origin server if the requested domain is 
vmprodcagpcna04.firstamdata.net

allow access to Squid B if the (original) requesting server is Squid A
(implied) deny access to any peers not specified above

...which would prevent any communication between cache_peers unless the 
request ORIGINATES from the server specified by 
acl_gpl_cache_sibling.  Third party, unrelated machines (clients) 
would never cause a peer access.  I think you would be better served with...


cache_peer_access cp_gpl_app_servers allow acl_gpl_app_servers
cache_peer_access cp_gpl_cache_sibling allow acl_gpl_app_servers

...which would allow communication to peers for ANY traffic with the 
destination domain you are serving.  You have already specified the 
origin server, so Squid will go there if the page is not available from 
a peer (or if the peer times out, or...).




Configuration for B is almost identical except the host.name.of.A in the
acl and cache_peer tags is switched with B's. 


Can someone point out what might be the problem here?

Thanks.
Roy


Chris



RE: [squid-users] Strange problem with sibling squids in accelerator mode

2009-07-01 Thread Lu, Roy
Yes, that was a typo. I tried your suggestion, and it fixed it. Thank
you very much for your help!

Roy

-Original Message-
From: Chris Robertson [mailto:crobert...@gci.net] 
Sent: Wednesday, July 01, 2009 1:35 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Strange problem with sibling squids in
accelerator mode


RE: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Alexandre DeAraujo
I am giving this one more try, but have been unsuccessful. Any help is always 
greatly appreciated.

Here is the setup:
Router:
Cisco 7200 IOS 12.4(25)
ip wccp web-cache redirect-list 11
access-list 11 permits only selective ip addresses to use wccp

Wan interface (Serial)
ip wccp web-cache redirect out

Global WCCP information:
Router information:
Router Identifier:  192.168.20.1
Protocol Version:   2.0

Service Identifier: web-cache
Number of Service Group Clients:1
Number of Service Group Routers:1
Total Packets s/w Redirected:   8797
Process:4723
Fast:   0
CEF:4074
Redirect access-list:   11
Total Packets Denied Redirect:  124925546
Total Packets Unassigned:   924514
Group access-list:  -none-
Total Messages Denied to Group: 0
Total Authentication failures:  0
Total Bypassed Packets Received:0

WCCP Client information:
WCCP Client ID: 192.168.20.2
Protocol Version:   2.0
State:  Usable
Initial Hash Info:  

Assigned Hash Info: 

Hash Allotment: 256 (100.00%)
Packets s/w Redirected: 306
Connect Time:   00:21:33
Bypassed Packets
Process:0
Fast:   0
CEF:0
Errors: 0

Clients are on FEthernet0/1
Squid server is the only device on FEthernet0/3

Squid Server:
eth0  Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D  
  inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
  inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

gre0  Link encap:UNSPEC  HWaddr 
00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00  
  inet addr:192.168.20.2  Mask:255.255.255.248
  UP RUNNING NOARP  MTU:1476  Metric:1
  RX packets:400 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)

/etc/rc.d/rc.local file:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
modprobe ip_gre
ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

/etc/sysconfig/iptables file:
# Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
*mangle
:PREROUTING ACCEPT [166:11172]
:INPUT ACCEPT [164:8718]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [130:12272]
:POSTROUTING ACCEPT [130:12272]
:DIVERT - [0:0]
-A DIVERT -j MARK --set-xmark 0x1/0xffffffff 
-A DIVERT -j ACCEPT 
-A PREROUTING -p tcp -m socket -j DIVERT 
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip 
192.168.20.2 --tproxy-mark 0x1/0x1 
COMMIT
# Completed on Wed Jul  1 03:32:55 2009
# Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [160:15168]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -i gre0 -j ACCEPT 
-A INPUT -p gre -j ACCEPT 
-A INPUT -i eth0 -p gre -j ACCEPT 
-A INPUT -j RH-Firewall-1-INPUT 
-A FORWARD -j RH-Firewall-1-INPUT 
-A RH-Firewall-1-INPUT -s 192.168.20.1/32 -p udp -m udp --dport 2048 -j ACCEPT 
-A RH-Firewall-1-INPUT -i lo -j ACCEPT 
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT 
-A RH-Firewall-1-INPUT -p esp -j ACCEPT 
-A RH-Firewall-1-INPUT -p ah -j ACCEPT 
-A RH-Firewall-1-INPUT -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT 
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT 
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT 
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT 
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited 
COMMIT
# Completed on Wed Jul  1 03:32:55 2009

-squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl testing src 10.10.10.0/24
acl SSL_ports port 

[squid-users] AuthNTLMConfig (squid 3.0) unrecognised

2009-07-01 Thread Beavis
Hi,

 I was successful in running ntlm_auth. (kerberos - OK, samba - OK)
with the following config on my squid.conf

auth_param ntlm program /usr/local/samba/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
auth_param ntlm use_ntlm_negotiate off

auth_param basic program /usr/local/samba/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid Proxy
auth_param basic credentialsttl 5 hours

acl ntlm proxy_auth REQUIRED
http_access allow ntlm


I can run Squid by issuing squid -D, but it will still display the
following errors while squid continues to run.

2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
scheme parameter 'max_challenge_reuses'
2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
scheme parameter 'max_challenge_lifetime'
2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
scheme parameter 'use_ntlm_negotiate'

any help would be awesomely appreciated.

-b

-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


Re: [squid-users] AuthNTLMConfig (squid 3.0) unrecognised

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 18:43:21 -0600, Beavis pfu...@gmail.com wrote:
 Hi,
 
  I was successful in running ntlm_auth. (kerberos - OK, samba - OK)
 with the following config on my squid.conf
 
 auth_param ntlm program /usr/local/samba/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 5
 auth_param ntlm max_challenge_reuses 0
 auth_param ntlm max_challenge_lifetime 2 minutes
 auth_param ntlm use_ntlm_negotiate off
 
 auth_param basic program /usr/local/samba/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm Squid Proxy
 auth_param basic credentialsttl 5 hours
 
 acl ntlm proxy_auth REQUIRED
 http_access allow ntlm
 
 
 I can run Squid by issuing squid -D but it will still display the
 following error, and continue to run squid.
 
 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'max_challenge_reuses'
 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'max_challenge_lifetime'
 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'use_ntlm_negotiate'
 
 any help would be awesomely appreciated.
 
 -b

http://www.mail-archive.com/squid-users@squid-cache.org/msg43675.html

PS. I'm not sure why Visolve still lists it in their manuals. They have lots
of nice simple explanations, but it's not very current.
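
For reference, those three parameters are Squid-2.5 era options that were
dropped; a 3.0-compatible version of that ntlm section is just (a sketch):

auth_param ntlm program /usr/local/samba/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on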

Amos



[squid-users] how to capture https transactions

2009-07-01 Thread Fulko Hew
I'm new to squid, and I thought I could use it as a proxy to detect transactions
that don't succeed and return to the browser an error page that
re-submits the original request, say 15 seconds
later.  (I want to use this to hide network and server failures from
end users at a kiosk.)

I've figured out how to do most of this for http transactions, but my
real target uses https, and when I look at the squid logs I see a
transaction called
CONNECT ... DIRECT ...

and these don't seem to go through, or at the very least it seems as though
the connections are not proxied, and hence DNS resolution and connection
failures aren't captured and don't result in squid error pages returned to the
browser.

Is this actually possible, and if so... what directives should I be looking
at in the config file?

Any suggestions and/or comments are welcome.

TIA
Fulko


RE: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 16:59:03 -0700, Alexandre DeAraujo al...@cal.net
wrote:
 I am giving this one more try, but have been unsuccessful. Any help is
 always greatly appreciated.

I've had the opportunity to work through this at a very low level with
someone recently and found the major issue with WCCP and TPROXY. It seems
the claimed auto-detection and bypass of proxy outbound traffic depends on
the IP address in the outbound packet (under TPROXYv4 this is the wrong
address).
Check that you can identify the outbound proxy traffic using something
other than the IP and add a bypass rule manually. Good things to test for
are source interface (ethX, MAC, EUI-64), or apparently service group.
Maybe TOS as well if you also have Squid set that.

The Features/Tproxy4 wiki page is updated with this and some details on
what to look for and some other general options that may solve it.

Amos

 
 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selective ip addresses to use wccp
 
 Wan interface (Serial)
 ip wccp web-cache redirect out
 
 Global WCCP information:
 Router information:
 Router Identifier:192.168.20.1
 Protocol Version: 2.0
 
 Service Identifier: web-cache
 Number of Service Group Clients:  1
 Number of Service Group Routers:  1
 Total Packets s/w Redirected: 8797
 Process:  4723
 Fast: 0
 CEF:  4074
 Redirect access-list: 11
 Total Packets Denied Redirect:124925546
 Total Packets Unassigned: 924514
 Group access-list:-none-
 Total Messages Denied to Group:   0
 Total Authentication failures:0
 Total Bypassed Packets Received:  0
 
 WCCP Client information:
 WCCP Client ID:   192.168.20.2
 Protocol Version: 2.0
 State:Usable
 Initial Hash Info:
   
 Assigned Hash Info:   
   
 Hash Allotment:   256 (100.00%)
 Packets s/w Redirected:   306
 Connect Time: 00:21:33
 Bypassed Packets
 Process:  0
 Fast: 0
 CEF:  0
 Errors:   0
 
 Clients are on FEthernet0/1
 Squid server is the only device on FEthernet0/3
 
 Squid Server:
 eth0  Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D  
   inet addr:192.168.20.2  Bcast:192.168.20.7 
Mask:255.255.255.248
   inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
   TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000 
   RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)
 
 gre0  Link encap:UNSPEC  HWaddr
 00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00  
   inet addr:192.168.20.2  Mask:255.255.255.248
   UP RUNNING NOARP  MTU:1476  Metric:1
   RX packets:400 errors:0 dropped:0 overruns:0 frame:0
   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0 
   RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)
 
 /etc/rc.d/rc.local file:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 modprobe ip_gre
 ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
 echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
 
 /etc/sysconfig/iptables file:
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *mangle
 :PREROUTING ACCEPT [166:11172]
 :INPUT ACCEPT [164:8718]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [130:12272]
 :POSTROUTING ACCEPT [130:12272]
 :DIVERT - [0:0]
 -A DIVERT -j MARK --set-xmark 0x1/0x 
 -A DIVERT -j ACCEPT 
 -A PREROUTING -p tcp -m socket -j DIVERT 
 -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip
 192.168.20.2 --tproxy-mark 0x1/0x1 
 COMMIT
 # Completed on Wed Jul  1 03:32:55 2009
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [160:15168]
 :RH-Firewall-1-INPUT - [0:0]
 -A INPUT -i gre0 -j ACCEPT 
 -A INPUT -p gre -j ACCEPT 
 -A INPUT -i eth0 -p gre -j ACCEPT 
 -A INPUT -j RH-Firewall-1-INPUT 
 -A FORWARD -j RH-Firewall-1-INPUT 
 -A RH-Firewall-1-INPUT -s 192.168.20.1/32 -p udp -m udp --dport 2048 -j
 ACCEPT 
 -A RH-Firewall-1-INPUT -i lo -j ACCEPT 
 -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT 
 -A RH-Firewall-1-INPUT -p esp -j ACCEPT 
 -A RH-Firewall-1-INPUT -p 

Re: [squid-users] how to capture https transactions

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 20:55:06 -0400, Fulko Hew fulko@gmail.com wrote:
 I'm new to squid, and I thought I could use it as a proxy to detect
 transactions that don't succeed and return a page to the browser that
 would display an error page that re-submitted the original request
 (again) say 15 seconds later.  (I want to use this to hide network and
 server failure from end users at a kiosk.)
 
 I've figured out how to do most of this for http transactions, but my
 real target uses https and when I look at the squid logs I see a
 transaction called CONNECT ... DIRECT ...
 
 and these don't seem to go through, or at the very least it seems as
 though the connections are not proxied, and hence DNS resolution and
 connection failures aren't captured and don't result in squid error
 pages returned to the browser.

Close. For https:// the browser is making a regular HTTP request, wrapped in
SSL encryption. Then that itself is wrapped again inside a CONNECT.

Squid just opens a CONNECT tunnel and shovels the bytes through. The SSL
connection is made inside the tunnel direct for client to server, and the
HTTPS stuff happens without Squid.

IIRC there was some problem found with browsers displaying any custom
response to a CONNECT failure. You want to look at the deny_info
ERR_CONNECT_FAIL page replacement or such.
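
As a hedged sketch of that direction: the ERR_CONNECT_FAIL template in
Squid's errors directory is plain HTML and can be edited so the error page
itself retries the failed URL. The %U macro below is Squid's own error-page
substitution for the request URL; the 15-second refresh is the poster's
requirement, not a tested recipe:

<html>
<head>
<!-- retry the original request after 15 seconds; %U expands to the failed URL -->
<meta http-equiv="refresh" content="15;url=%U">
</head>
<body>
<p>The service is temporarily unavailable. Retrying automatically ...</p>
</body>
</html>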

 
 Is this actually possible, and if so... what directives should I be
 looking at for the config file.

Squid 3.1 provides a SslBump feature to unwrap the CONNECT and proxy the
SSL portions. But decrypting the SSL link mid-way is called a man-in-middle
security attack. The browser security pops up a warning dialog box to the
users every time this happens. I would not think this will be popular or
good for a kiosk situation.
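
For completeness, a minimal squid.conf sketch of that 3.1 feature (the
sslBump port option and the ssl_bump access list are documented for 3.1;
the port number and certificate paths here are placeholders):

# present Squid's own certificate and decrypt CONNECT traffic
http_port 3128 sslBump cert=/etc/squid/proxy.pem key=/etc/squid/proxy.pem
ssl_bump allow all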

Amos



Re: [squid-users] squid-3.1.0.9 - error directory not created automatically

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 08:55:25 +0200, Zeller, Jan jan.zel...@id.unibe.ch
wrote:
 dear list,
 
 I compiled squid-3.1.0.9 like this :
 
 $ squid -v
 Squid Cache: Version 3.1.0.9
 configure options:  '--prefix=/opt/squid-3.1.0.9' '--enable-icap-client'
 '--enable-ssl' '--enable-linux-netfilter' '--enable-http-violations'
 '--with-filedescriptors=32768' '--with-pthreads' '--disable-ipv6'
 --with-squid=/usr/local/src/squid-3.1.0.9 --enable-ltdl-convenience
 
 Unfortunately there is no 'error' directory created !? Why ?
squid-3.1.0.7
 created this directory automatically.
 Should I explicitly download the language pack from
 http://www.squid-cache.org/Versions/langpack/ ?
 
 kind regards,

Small muckup changing the install locations.  I'm aware of and working on
this.
Yes, the langpack download+install would be a good work-around.
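
A hedged sketch of that work-around (pick the current tarball name from the
langpack page; the file name and the install prefix below are assumptions
based on the configure options quoted above):

# fetch the language pack and unpack it where Squid expects its error pages
cd /tmp
wget http://www.squid-cache.org/Versions/langpack/squid-langpack-latest.tar.gz
mkdir -p /opt/squid-3.1.0.9/share/errors
tar -xzf squid-langpack-latest.tar.gz -C /opt/squid-3.1.0.9/share/errors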

Amos


Re: [squid-users] AuthNTLMConfig (squid 3.0) unrecognised

2009-07-01 Thread Beavis
thanks amos.. I actually took out those lines and it worked ok =)



On Wed, Jul 1, 2009 at 6:54 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Wed, 1 Jul 2009 18:43:21 -0600, Beavis pfu...@gmail.com wrote:
 Hi,

  I was successful in running ntlm_auth. (kerberos - OK, samba - OK)
 with the following config on my squid.conf

 auth_param ntlm program /usr/local/samba/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 5
 auth_param ntlm max_challenge_reuses 0
 auth_param ntlm max_challenge_lifetime 2 minutes
 auth_param ntlm use_ntlm_negotiate off

 auth_param basic program /usr/local/samba/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm Squid Proxy
 auth_param basic credentialsttl 5 hours

 acl ntlm proxy_auth REQUIRED
 http_access allow ntlm


 I can run Squid by issuing squid -D but it will still display the
 following errors, and continue to run.

 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'max_challenge_reuses'
 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'max_challenge_lifetime'
 2009/07/01 18:37:14| AuthNTLMConfig::parse: unrecognised ntlm auth
 scheme parameter 'use_ntlm_negotiate'

 any help would be awesomely appreciated.

 -b

 http://www.mail-archive.com/squid-users@squid-cache.org/msg43675.html

 PS. I'm not sure why Visolve still lists it in their manuals. They have
 lots of nice simple explanations, but it's not very current.

 Amos





-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


Re: [squid-users] [squid 2.7 stable 3][ubuntu 9.04][shared usage with two proxies]

2009-07-01 Thread Amos Jeffries
On Wed, 01 Jul 2009 12:16:28 -0800, Chris Robertson crobert...@gci.net
wrote:
 Volker Jahns wrote:
 Hi all.

 I don't know which headline is the best one, so sorry for probably a bad
 one.

 What is the problem?

 I want to work with two (or more) squid proxies called P1 and P2 (perhaps
 two or three P2-kind servers). But first it should work with simply one P2
 server.

 P1: The task for P1 is authorization in our HQ for all users, even for
 the users in the branches.

 P2: The task for P2 is caching only, at the internet interface somewhere
 in the network; after P1 accepts the authorization in the HQ (also for
 the users in the branches) it delivers the response directly to the
 requesting client.

 Every branch is at least one network, plus the HQ network. To reduce the
 traffic in the whole network I want P2 to send the requested pages from
 the internet or its own cache to the branch, and not via P1 in the HQ. The
 background: no senseless traffic at the HQ gateway.

 A short data flow the usual way:

 Branch client x -- authorization HQ (P1) -- forward request -- internet
 gateway (P2) -- get request internet or cache (P2) -- deliver page -- P1
 -- deliver page client x Branch

 A short data flow example how it should work:

 Branch client x -- authorization HQ (P1) -- forward request -- internet
 gateway (P2) -- get request internet or cache (P2) -- deliver page --
 client x Branch

 The difference seems to be small but it is important.

 First question for general: does it work?
   
 
 So, for example, your P1 proxy has an IP address of 10.0.0.5 and your P2 
 proxy has an IP address of 10.10.10.5.  Your client (10.20.20.200) makes 
 a request for a web object from 10.0.0.5 and (since it has already made 
 a request and knows that authentication is required) sends its
 authentication credentials.  10.0.0.5 sends the request to 10.10.10.5.  
 There is no way for 10.10.10.5 to send a reply to 10.20.20.200, as there 
 is no TCP connection to send the reply on.
 
 Second question if it works: how do I configure this?
   
 
 Your best bet would be to just send your clients to the P2 server, and 
 let it pull the authentication from the source currently being used by 
 the P1 server.
 
 Until now I have P1 configured as sibling with a second cache (P2) as
 parent, acting as origin server with no via and no http11.
 
 Wait, what?  You have a forward proxy going to a reverse proxy, which is 
 accelerating the entire internet, while stripping the Via header?

It happens. Some people mistake the meaning of 'originserver'. Others
discover that they get an auth popup when they do this, without realizing
the consequences of where those internal credentials are being displayed.


What you want, Volker, is to cache_peer ... login=PASS down the parent
link. Drop the 'originserver' option. And as Chris said, allow as many
squids as possible to do their own auth checks against the right source.
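
A minimal sketch of that change on P1 (cache_peer and login=PASS are
standard directives; the host name and ports are placeholders for this
setup):

# on P1: treat P2 as a plain parent proxy and relay the client's credentials
cache_peer p2.example.com parent 3128 0 no-query login=PASS
never_direct allow all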


 
 The authorization on P1 works and P2 tries to get the requested page. But
 in fact on the way from P1 to P2 the URI header information is lost
 (simply a / was left), and in the end it is not working yet.
   
 
 I imagine it's not...
 
 Hope someone could help.
 Volker
   
 
 Chris

Amos



Re: [squid-users] how to capture https transactions

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 18:46:32 -0700, George Herbert
george.herb...@gmail.com
wrote:
On Wed, Jul 1, 2009 at 6:13 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
 On Wed, 1 Jul 2009 20:55:06 -0400, Fulko Hew fulko@gmail.com
wrote:
 I'm new to squid, and I thought I could use it as a proxy to detect
 transactions that don't succeed and return a page to the browser that
 would display an error page that re-submitted the original request
 (again) say 15 seconds later.  (I want to use this to hide network and
 server failure from end users at a kiosk.)

 I've figured out how to do most of this for http transactions, but my
 real target uses https and when I look at the squid logs I see a
 transaction called CONNECT ... DIRECT ...

 and these don't seem to go through, or at the very least it seems as
 though the connections are not proxied, and hence DNS resolution and
 connection failures aren't captured and don't result in squid error
 pages returned to the browser.

 Close. For https:// the browser is making a regular HTTP request, wrapped
 in SSL encryption. Then that itself is wrapped again inside a CONNECT.

 Squid just opens a CONNECT tunnel and shovels the bytes through. The SSL
 connection is made inside the tunnel direct for client to server, and the
 HTTPS stuff happens without Squid.

 IIRC there was some problem found with browsers displaying any custom
 response to a CONNECT failure. You want to look at the deny_info
 ERR_CONNECT_FAIL page replacement or such.


 Is this actually possible, and if so... what directives should I be
 looking at for the config file.

 Squid 3.1 provides a SslBump feature to unwrap the CONNECT and proxy the
 SSL portions. But decrypting the SSL link mid-way is called a man-in-middle
 security attack. The browser security pops up a warning dialog box to the
 users every time this happens. I would not think this will be popular or
 good for a kiosk situation.
 
 I don't know if Squid knows how to do this (haven't checked), but
 other load balancers, accelerators, and firewalls can sometimes have
 the site SSL / https keys installed to allow them to interact with
 https content going back and forth.  There's an ethereal / wireshark
 module you can give your site key to decrypt that traffic.
 
 That only works if:
 
 a) you own both ends of the link (not clear from first email),
 b) your software supports it
 c) you trust your proxies with your site keys

That is exactly the SslBump feature I mentioned. It causes browser popups
when they validate the cert and discover that the domain they are
contacting has the wrong credentials.
Users often ignore such popups, but for this public kiosk situation its
very likely to lead to complaints and a minor scandal.

Amos



Re: [squid-users] how to capture https transactions

2009-07-01 Thread George Herbert
On Wed, Jul 1, 2009 at 6:13 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Wed, 1 Jul 2009 20:55:06 -0400, Fulko Hew fulko@gmail.com wrote:
 I'm new to squid, and I thought I could use it as a proxy to detect
 transactions that don't succeed and return a page to the browser that
 would display an error page that re-submitted the original request
 (again) say 15 seconds later.  (I want to use this to hide network and
 server failure from end users at a kiosk.)

 I've figured out how to do most of this for http transactions, but my
 real target uses https and when I look at the squid logs I see a
 transaction called CONNECT ... DIRECT ...

 and these don't seem to go through, or at the very least it seems as
 though the connections are not proxied, and hence DNS resolution and
 connection failures aren't captured and don't result in squid error
 pages returned to the browser.

 Close. For https:// the browser is making a regular HTTP request, wrapped in
 SSL encryption. Then that itself is wrapped again inside a CONNECT.

 Squid just opens a CONNECT tunnel and shovels the bytes through. The SSL
 connection is made inside the tunnel direct for client to server, and the
 HTTPS stuff happens without Squid.

 IIRC there was some problem found with browsers displaying any custom
 response to a CONNECT failure. You want to look at the deny_info
 ERR_CONNECT_FAIL page replacement or such.


 Is this actually possible, and if so... what directives should I be
 looking at for the config file.

 Squid 3.1 provides a SslBump feature to unwrap the CONNECT and proxy the
 SSL portions. But decrypting the SSL link mid-way is called a man-in-middle
 security attack. The browser security pops up a warning dialog box to the
 users every time this happens. I would not think this will be popular or
 good for a kiosk situation.

I don't know if Squid knows how to do this (haven't checked), but
other load balancers, accelerators, and firewalls can sometimes have
the site SSL / https keys installed to allow them to interact with
https content going back and forth.  There's an ethereal / wireshark
module you can give your site key to decrypt that traffic.

That only works if:

a) you own both ends of the link (not clear from first email),
b) your software supports it
c) you trust your proxies with your site keys


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] squid becomes very slow during peak hours

2009-07-01 Thread Brett Glass

I wonder if your problem might be diskd.

At one time, diskd was hailed as a great way to speed up a cache, 
but that was back in the days when caches had a small fraction of 
the load they do today. Nowadays, it appears that diskd's overhead 
creates a huge bottleneck.


Have you tried COSS or AUFS?
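
For illustration, hedged cache_dir lines for the two alternatives (squid
2.x syntax; the paths, sizes and COSS parameters are placeholders to
adapt):

# AUFS: threaded disk I/O, good general-purpose store
cache_dir aufs /cache 16384 64 256
# COSS: cyclic single-file store for small objects (2.6/2.7 only)
cache_dir coss /cache/coss 1024 max-size=65536 block-size=512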

--Brett Glass



[squid-users] fetch page error

2009-07-01 Thread Tech W.

Hello,

When I fetch this page via Squid (latest 3.0 version):

<html>
<body onload="document.submitFrm.submit()">
<form method="post" name="submitFrm" action="index2.shtml" target="_self">
</form>
</body>
</html>


Most of the time I get it successfully, but sometimes it fails.
Squid responded with this header:

X-Squid-Error: ERR_CONNECT_FAIL 113


Why does this happen? Thanks.

Regards.


  



Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Adrian Chadd
This won't work. You're only redirecting half of the traffic flow with
the wccp web-cache service group. The tproxy code is probably
correctly trying to originate packets -from- the client IP address to
the upstream server but because you're only redirecting half of the
packets (ie, packets from original client to upstream, and not also
the packets from the upstream to the client - and this is the flow
that needs to be hijacked!) things will hang.

You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
setup. There are two service groups configured - 80 and 90 - which
redirect client -> server and server -> client respectively. They have
the right bits set in the service group definitions to redirect the
traffic correctly.
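
As a hedged sketch of what that looks like on the Squid side (the wccp2_*
directives are standard; service numbers 80/90 follow the TPROXY2 examples
Adrian cites, and the router IP is taken from the setup quoted below):

# squid.conf: register both flow directions with the router
wccp2_router 192.168.20.1
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

The router then needs matching 'ip wccp 80' and 'ip wccp 90' definitions
with redirect statements for each direction, in place of the single
web-cache group.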

The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
find the TPROXY2 pages to extract the right WCCPv2 setup to use,
then combine that with the TPROXY4 rules. That is fine for me (I know
a thing or two about this) but it should all be made much, much
clearer for people trying to set this up.

As I suggested earlier, you may wish to consider fleshing out an
interception section in the Wiki complete with explanations about how
all of the various parts of the puzzle hold together.

2c,


adrian

2009/7/2 Alexandre DeAraujo al...@cal.net:
 I am giving this one more try, but have been unsuccessful. Any help is always 
 greatly appreciated.

 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selective ip addresses to use wccp

 Wan interface (Serial)
 ip wccp web-cache redirect out

 Global WCCP information:
 Router information:
 Router Identifier:                      192.168.20.1
 Protocol Version:                       2.0

 Service Identifier: web-cache
 Number of Service Group Clients:        1
 Number of Service Group Routers:        1
 Total Packets s/w Redirected:   8797
 Process:                                4723
 Fast:                                   0
 CEF:                                    4074
 Redirect access-list:                   11
 Total Packets Denied Redirect:  124925546
 Total Packets Unassigned:               924514
 Group access-list:                      -none-
 Total Messages Denied to Group: 0
 Total Authentication failures:          0
 Total Bypassed Packets Received:        0

 WCCP Client information:
 WCCP Client ID: 192.168.20.2
 Protocol Version:       2.0
 State:                  Usable
 Initial Hash Info:      
                        
 Assigned Hash Info:     
                        
 Hash Allotment: 256 (100.00%)
 Packets s/w Redirected: 306
 Connect Time:           00:21:33
 Bypassed Packets
 Process:                0
 Fast:                   0
 CEF:                    0
 Errors:                 0

 Clients are on FEthernet0/1
 Squid server is the only device on FEthernet0/3
 
 Squid Server:
 eth0      Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D
          inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
          inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

 gre0      Link encap:UNSPEC  HWaddr 
 00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00
          inet addr:192.168.20.2  Mask:255.255.255.248
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)
 
 /etc/rc.d/rc.local file:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 modprobe ip_gre
 ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
 echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
 
 /etc/sysconfig/iptables file:
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *mangle
 :PREROUTING ACCEPT [166:11172]
 :INPUT ACCEPT [164:8718]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [130:12272]
 :POSTROUTING ACCEPT [130:12272]
 :DIVERT - [0:0]
 -A DIVERT -j MARK --set-xmark 0x1/0x
 -A DIVERT -j ACCEPT
 -A PREROUTING -p tcp -m socket -j DIVERT
 -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip 
 192.168.20.2 --tproxy-mark 0x1/0x1
 COMMIT
 # Completed on Wed Jul  1 03:32:55 2009
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *filter
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT 

RE: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-01 Thread Ritter, Nicholas
I have not finished updating the wiki article for the CentOS example, BTW.

I will do this by tomorrow or possibly tonight yet.

Nick


-Original Message-
From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf Of 
Adrian Chadd
Sent: Wednesday, July 01, 2009 11:10 PM
To: Alexandre DeAraujo
Cc: Ritter, Nicholas; squid-users
Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

This won't work. You're only redirecting half of the traffic flow with
the wccp web-cache service group. The tproxy code is probably
correctly trying to originate packets -from- the client IP address to
the upstream server but because you're only redirecting half of the
packets (ie, packets from original client to upstream, and not also
the packets from the upstream to the client - and this is the flow
that needs to be hijacked!) things will hang.

You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
setup. There are two service groups configured - 80 and 90 - which
redirect client -> server and server -> client respectively. They have
the right bits set in the service group definitions to redirect the
traffic correctly.

The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
find the TPROXY2 pages to extract the right WCCPv2 setup to use,
then combine that with the TPROXY4 rules. That is fine for me (I know
a thing or two about this) but it should all be made much, much
clearer for people trying to set this up.

As I suggested earlier, you may wish to consider fleshing out an
interception section in the Wiki complete with explanations about how
all of the various parts of the puzzle hold together.

2c,


adrian

2009/7/2 Alexandre DeAraujo al...@cal.net:
 I am giving this one more try, but have been unsuccessful. Any help is always 
 greatly appreciated.

 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selective ip addresses to use wccp

 Wan interface (Serial)
 ip wccp web-cache redirect out

 Global WCCP information:
 Router information:
 Router Identifier:                      192.168.20.1
 Protocol Version:                       2.0

 Service Identifier: web-cache
 Number of Service Group Clients:        1
 Number of Service Group Routers:        1
 Total Packets s/w Redirected:   8797
 Process:                                4723
 Fast:                                   0
 CEF:                                    4074
 Redirect access-list:                   11
 Total Packets Denied Redirect:  124925546
 Total Packets Unassigned:               924514
 Group access-list:                      -none-
 Total Messages Denied to Group: 0
 Total Authentication failures:          0
 Total Bypassed Packets Received:        0

 WCCP Client information:
 WCCP Client ID: 192.168.20.2
 Protocol Version:       2.0
 State:                  Usable
 Initial Hash Info:      
                        
 Assigned Hash Info:     
                        
 Hash Allotment: 256 (100.00%)
 Packets s/w Redirected: 306
 Connect Time:           00:21:33
 Bypassed Packets
 Process:                0
 Fast:                   0
 CEF:                    0
 Errors:                 0

 Clients are on FEthernet0/1
 Squid server is the only device on FEthernet0/3
 
 Squid Server:
 eth0      Link encap:Ethernet  HWaddr 00:14:22:21:A1:7D
          inet addr:192.168.20.2  Bcast:192.168.20.7  Mask:255.255.255.248
          inet6 addr: fe80::214:22ff:fe21:a17d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3325 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2606 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:335149 (327.2 KiB)  TX bytes:394943 (385.6 KiB)

 gre0      Link encap:UNSPEC  HWaddr 
 00-00-00-00-CB-BF-F4-FF-00-00-00-00-00-00-00-00
          inet addr:192.168.20.2  Mask:255.255.255.248
          UP RUNNING NOARP  MTU:1476  Metric:1
          RX packets:400 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:31760 (31.0 KiB)  TX bytes:0 (0.0 b)
 
 /etc/rc.d/rc.local file:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 modprobe ip_gre
 ifconfig gre0 192.168.20.2 netmask 255.255.255.248 up
 echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
 
 /etc/sysconfig/iptables file:
 # Generated by iptables-save v1.4.4 on Wed Jul  1 03:32:55 2009
 *mangle
 :PREROUTING ACCEPT [166:11172]
 :INPUT ACCEPT [164:8718]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [130:12272]
 

Re: Fw: [squid-users] NTLM Auth and Java applets (Any update)

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 12:56:43 +0200, Gontzal gontz...@gmail.com wrote:
 Hi,
 
 I've recompiled squid, now 3.0 stable 16 on a non-production opensuse
 10.3 server with the --enable-http-violations option
 I've added the following lines to my squid.conf file:
 
 acl Java browser Java/1.4 Java/1.5 Java/1.6
 
 header_access Proxy-Authenticate deny Java
 header_replace Proxy-Authenticate Basic realm=
 
 The header tags are before the http_access tags, I don't know if it is
 correct. I've also disabled the option http_access allow Java
 
 Squid runs correctly but when I check for java, it doesn't work; it
 doesn't ask for basic auth and doesn't show the java applet page.
 
 On the access log it shows lines like this one:
 
 (01/Jul 12:46:01) (TCP_DENIED/407/NONE) (172.28.3.186=172.28.129.250)
 (tp.seg-social.es:443) text/html-2226bytes 1ms
 
 I've changed the identity of my browser from firefox to java and it
 browses using ntlm auth instead of asking for user/passwd
 
 Where can the problem be?

In squid-3 the header_access has been broken in half.

I believe you need to use reply_header_access.
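
Translated to the squid-3 names, the two lines would look roughly like this
(reply_header_access is the squid-3 directive; the realm string is a
placeholder, since the original value was lost in the archive):

reply_header_access Proxy-Authenticate deny Java
header_replace Proxy-Authenticate Basic realm="proxy"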

Amos

 
 Thanks again!
 
 2009/6/30 Amos Jeffries squ...@treenet.co.nz:


 I agree this does look like a good clean solution. I'll look at
 implementing a small on/off toggle to do only this change for safer Java
 bypass. May not be very soon though. What version of Squid are you
using?

 Meanwhile yes, you do have to add the option to the ./configure options
 and
 re-compile = re-install Squid.
 The install process if done right should not alter existing squid.conf
 and
 be a simple drop-in to the existing install. But a backup is worth doing
 just in case.
 If currently using a packaged Squid, you may want to contact the package
 maintainer for any help on the configure and install steps.

 Amos

 On Mon, 29 Jun 2009 10:40:06 +0200, Gontzal gontz...@gmail.com wrote:
 Hi Kevin,


 Thanks for your post, I think it is a very good solution to the Java
 security hole.

 I've seen that for using header_access and header_replace you need to
 compile with the --enable-http-violations. My question is, if I
 compiled squid without this option, is there any way to add this
 feature or do I have to compile the entire squid again? In that case, should I
 save my configuration files?

 Where should I put these lines, after acls?

 Thanks again

 Gontzal

 2009/6/27 Kevin Blackwell akblack...@gmail.com:
 Is this what you're looking for?

 acl javaNtlmFix browser -i java
 acl javaConnect method CONNECT
 header_access Proxy-Authenticate deny javaNtlmFix javaConnect
 header_replace Proxy-Authenticate Basic realm=Internet

 Now only https/ssl access from Java will get basic auth, and so a
 password dialog.
 Normal http access will work with NTLM challenge response.

 thanxs again

 markus

-Original Message-
From: Rietzler, Markus (Firma Rietzler Software / RZF)
Sent: Tuesday, 16 October 2007 18:17
To: 'Chris Robertson'; squid-users@squid-cache.org
Subject: RE: [squid-users] force basic NTLM-auth for certain
clients/urls

thanxs for that hint - it worked as a fix

I have added this to my squid.conf

acl javaNtlmFix browser -i java
header_access Proxy-Authenticate deny javaNtlmFix
header_replace Proxy-Authenticate Basic realm=Internet Access

Now any java-client (java web start, java or applets in
browser) will only see the basic auth scheme.
A username/password dialog pops up and I have to enter my credentials.

Any other client (firefox, ie) still sees both the NTLM and Basic
schemes and uses NTLM challenge response to authenticate...

The little drawback is that there is that little nasty dialog,
but the connection via the proxy is working...

thanxs

markus


 On Sat, May 9, 2009 at 12:13 AM, Nitin Bhadauria
 nitin.bhadau...@tetrain.com wrote:
 Dear All,

 Please reply if we have some solution for the problem. I am stuck with
 the problem: my server is live and I can't afford to allow the java
 sites to unauthorized users in the network.

 Regards,
 Nitin B.


 Nitin Bhadauria wrote:

 Dear All,


 I have the same problem ..

 Every time a browser proxying through squid tries to load a secure java
 applet, it comes up with a red x where the java applet should be.


 So I have bypassed those sites for authentication, but the problem is
 that users who don't have permission to access the internet are also
 able to access those sites.

 Please update if we find any other solution for the problem.

 Thanks in advance for any reply.

 Regards,
 Nitin Bhadauria



[squid-users] ntlm group acl's

2009-07-01 Thread Beavis
is it possible for squid to have the option where it can be tailored
to apply ACLs based on groups in AD?

any help would be awesomely appreciated.

regards,
-b

-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


Re: [squid-users] ntlm group acl's

2009-07-01 Thread Amos Jeffries
On Wed, 1 Jul 2009 23:32:36 -0600, Beavis pfu...@gmail.com wrote:
 is it possible for squid to have the option where it can be tailored
 to apply ACLs based on groups in AD?
 
 any help would be awesomely appreciated.
 
 regards,
 -b

http://wiki.squid-cache.org/SquidFaq/SquidAcl

ACLs are applied left-to-right within each access line. Placing an ACL
which matches a specific group left of an ACL which tests something else
will cause the second ACL to only be checked if the first ACL matches.
ie
  http_access allow AdminGroup !FakeAdminClones

This is one of the ordering techniques I recommend to minimise the
impact of regex and other slow ACLs.
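
For the AD-group part of the question, the usual wiring is an external ACL
helper (external_acl_type is standard and wbinfo_group.pl ships with
Squid's Samba helpers, but the helper path and group names here are
assumptions):

external_acl_type ad_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl AdminGroup external ad_group InternetAdmins
acl InternetUsers external ad_group InternetUsers
http_access allow AdminGroup
http_access allow InternetUsers
http_access deny all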

Amos