WCCP2 patch

2008-01-28 Thread Steven Wilton
We've had a couple of problems on our caches using WCCP2 + tproxy  
where the caches registered in a different order with the different  
WCCP services (traffic to the web server VS traffic from the web  
server).  This resulted in the assignment algorithm sending traffic  
coming from a particular client to one cache, and the reply traffic to  
a different cache.


My solution is to order the linked list of web caches in the WCCP2  
code, to ensure that each cache will be assigned the same hash map  
regardless of when the cache registers with the router.


I've attached the patch, which we're using in production.  I'll commit  
it to head shortly unless there are any objections, and I'm not sure if  
it's worth applying to 2.6/2.7/3.0.


Steven


This message was sent using IMP, the Internet Messaging Program.

Index: src/wccp2.c
===================================================================
RCS file: /cvsroot/squid/squid/src/wccp2.c,v
retrieving revision 1.29
diff -u -u -r1.29 wccp2.c
--- src/wccp2.c 26 Dec 2007 23:52:02 -0000  1.29
+++ src/wccp2.c 25 Jan 2008 01:10:55 -0000
@@ -361,6 +361,7 @@
 /* END WCCP V2 */
 void wccp2_add_service_list(int service, int service_id, int service_priority,
     int service_proto, int service_flags, int ports[], int security_type, char *password);
+static void wccp2SortCacheList(struct wccp2_cache_list_t *head);
 
 /*
  * The functions used during startup:
@@ -1166,6 +1167,8 @@
found = 1;
num_caches = 1;
 }
+wccp2SortCacheList(&router_list_ptr->cache_list_head);
+
 router_list_ptr->num_caches = htonl(num_caches);

 if ((found == 1) && (service_list_ptr->lowest_ip == 1)) {
@@ -1913,6 +1916,39 @@
 }
 }
 
+static void
+wccp2SortCacheList(struct wccp2_cache_list_t *head) {
+    struct wccp2_cache_list_t tmp;
+    struct wccp2_cache_list_t *this_item;
+    struct wccp2_cache_list_t *find_item;
+    struct wccp2_cache_list_t *next_lowest;
+
+    /* Go through each position in the list one at a time */
+    for (this_item = head; this_item->next; this_item = this_item->next) {
+        /* Find the item with the lowest IP */
+        next_lowest = this_item;
+
+        for (find_item = this_item; find_item->next; find_item = find_item->next) {
+            if (find_item->cache_ip.s_addr < next_lowest->cache_ip.s_addr) {
+                next_lowest = find_item;
+            }
+        }
+        /* Swap if we need to */
+        if (next_lowest != this_item) {
+            /* First make a copy of the current item */
+            memcpy(&tmp, this_item, sizeof(struct wccp2_cache_list_t));
+
+            /* Next update the pointers to maintain the linked list */
+            tmp.next = next_lowest->next;
+            next_lowest->next = this_item->next;
+
+            /* Finally copy the updated items to their correct location */
+            memcpy(this_item, next_lowest, sizeof(struct wccp2_cache_list_t));
+            memcpy(next_lowest, &tmp, sizeof(struct wccp2_cache_list_t));
+        }
+    }
+}
+
 void
 free_wccp2_service_info(void *v)
 {


Re: tproxy caching?

2008-01-28 Thread Steven Wilton

Quoting Adrian Chadd [EMAIL PROTECTED]:


I've got tproxy + squid-2.7 here and I noticed that some stuff wasn't
being cached after I unsubtly made the content cachable.

The problem is repaired here:

Index: forward.c
===================================================================
RCS file: /cvsroot/squid/squid/src/forward.c,v
retrieving revision 1.131
diff -u -r1.131 forward.c
--- forward.c   5 Sep 2007 20:03:08 -0000   1.131
+++ forward.c   20 Jan 2008 06:47:17 -0000
@@ -712,7 +712,7 @@
  * peer, then don't cache, and use the IP that the client's DNS lookup
  * returned
  */
-if (fwdState->request->flags.transparent && fwdState->n_tries && (NULL == fs->peer)) {
+if (fwdState->request->flags.transparent && (fwdState->n_tries > 1) && (NULL == fs->peer)) {

 storeRelease(fwdState->entry);
 commConnectStart(fd, host, port, fwdConnectDone, fwdState,
     fwdState->request->my_addr);

 } else {

The problem is that n_tries is always going to be 1 at this point,  
even before it attempts a new connection, and stuff is just suddenly  
uncachable.

Am I on the right track?


The patch looks good to me.

Steven






request->my_addr change from 2.6.STABLE3 to 2.6.STABLE10

2007-03-18 Thread Steven Wilton
I've just discovered that, due to the reworking of the places where
clientNatLookup is called, request->my_addr now contains the local IP of the
proxy server rather than the IP address of the web server the client thinks
it's talking to for transparent requests.  I'm assuming that this was done
deliberately, but it has caused a regression for a patch that I'm using.

Can anyone confirm whether request->my_addr should contain the IP address
that the customer has connected to, or the IP address that the OS has
redirected the packet to, for requests that have been sent to squid using
NAT?  (ie the client sends a request to 66.102.7.104:80 and the proxy
redirects it to 192.168.0.1:3128.  Should request->my_addr contain the
66.102.7.104 IP, or 192.168.0.1?)

Steven

-- 
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.5.446 / Virus Database: 268.18.13/726 - Release Date: 18/03/2007
3:34 PM
 



RE: request->my_addr change from 2.6.STABLE3 to 2.6.STABLE10

2007-03-18 Thread Steven Wilton
 -Original Message-
 From: Steven Wilton [mailto:[EMAIL PROTECTED] 
 Sent: Monday, 19 March 2007 8:46 AM
 To: squid-dev@squid-cache.org
 Subject: request-my_addr change from 2.6.STABLE3 to 2.6.STABLE10
 
 I've just discovered that, due to the reworking of the places where
 clientNatLookup is called, request->my_addr now contains the local
 IP of the proxy server rather than the IP address of the web server
 the client thinks it's talking to for transparent requests.  I'm
 assuming that this was done deliberately, but it has caused a
 regression for a patch that I'm using.
 
 Can anyone confirm whether request->my_addr should contain the IP
 address that the customer has connected to, or the IP address that
 the OS has redirected the packet to, for requests that have been
 sent to squid using NAT?  (ie the client sends a request to
 66.102.7.104:80 and the proxy redirects it to 192.168.0.1:3128.
 Should request->my_addr contain the 66.102.7.104 IP, or
 192.168.0.1?)
 

I just figured it out - the change was that clientNatLookup was only being
called if the host header was not given.  I've modified my patch accordingly.

Steven




A few patches

2007-03-13 Thread Steven Wilton
I've attached 3 patches to this message for comment.

The first patch (transparent-pipeline.patch) is simple - I'd like to allow NTLM 
auth to work even when pipelined requests are enabled, but only for transparent 
requests.  I think that this is a safe option, as the web browser thinks it's 
talking directly to the web server for transparent requests.

The second patch (transparent-dns-hint.patch) is designed to use the 
destination IP that the client was attempting to connect to as the server IP if 
DNS lookup fail (for a transparent request).  storeRelease is called as soon as 
possible in forward.c to stop the object from being cached.  This allows 
customers to use unofficial DNS servers, or even entries in /etc/hosts to visit 
web sites through squid, while still maintaining the integrity of cached 
objects (by not caching the objects).

The third patch (transparent-nonhttp.patch) is designed to allow squid to 
handle non-http traffic.  If a request cannot be decoded by squid, and it was 
a transparently intercepted request, it will be transformed into a CONNECT 
request to the server that the client was trying to contact, and all data will 
be passed to/from the server untouched by squid.  (I have a second copy of this 
patch that has been tested, and I can confirm that it works when patched 
against squid 2.6.10.  The attached patch was created against the CVS tree of 
2.6, and does need testing.)



The idea behind all of the above patches is to try and make squid as 
transparent as possible when in transparent mode (ie performance and behaviour 
on port 80 should be the same as it would be with no proxy).

Regards

Steven



transparent-pipeline.patch
Description: transparent-pipeline.patch


transparent-dns-hint.patch
Description: transparent-dns-hint.patch


transparent-nonhttp.patch
Description: transparent-nonhttp.patch


RE: A few patches

2007-03-13 Thread Steven Wilton
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 13 March 2007 4:59 PM
 To: Steven Wilton
 Cc: 'Adrian Chadd'; squid-dev@squid-cache.org
 Subject: Re: A few patches
 
 On Tue, Mar 13, 2007, Steven Wilton wrote:
 
  Good point.  The only problem is that (under Linux at 
 least) we can't find
  out the original destination port (ie if traffic destined 
 for port 80 is
  redirected to port 3128).  Would you suggest this as a 
 configuration option
  on a per-port basis? (ie squid can listen to multiple 
 ports, and the port
  that the connection arrives on is used to determine the 
 destination port).
 
 What, this isn't accessible via clientNatLookup() ? Hm! I'm 
 sure I've seen
 it supported somehow/somewhere.

It looks like BSD may support this, but the Linux NAT lookup does not write
the destination port into the struct (I checked using gdb).  I'm interested
to see if anyone knows another way, otherwise the only way I can see this
working is using a per-port configuration option.

Steven





RE: A few patches

2007-03-13 Thread Steven Wilton

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 14 March 2007 9:30 AM
 To: Steven Wilton
 Cc: 'Adrian Chadd'; squid-dev@squid-cache.org
 Subject: RE: A few patches
 
 tis 2007-03-13 klockan 15:46 +0900 skrev Steven Wilton:
 
  Good point.  The only problem is that (under Linux at 
 least) we can't find
  out the original destination port (ie if traffic destined 
 for port 80 is
  redirected to port 3128).
 
 conn->me has the original IP and port in transparently intercepted
 connections.

I ran squid under gdb, and I was seeing the IP address being updated in the
conn->me structure, but not the port.  I'll re-work the patch on this basis.

Steven.




RE: A few patches

2007-03-12 Thread Steven Wilton
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 13 March 2007 3:14 PM
 To: Steven
 Cc: squid-dev@squid-cache.org
 Subject: Re: A few patches
 
 On Tue, Mar 13, 2007, Steven wrote:
 
 
 This bit is clever! Don't use a CONNECT to port 80 though; 
 try to find out which port
 it was connecting to in the first place and append that. It 
 won't always be port 80.
 (Imagine if someone wanted to feed more than just port 80 
 through Squid transparently;
 the current code handles that.)

Good point.  The only problem is that (under Linux at least) we can't find
out the original destination port (ie if traffic destined for port 80 is
redirected to port 3128).  Would you suggest this as a configuration option
on a per-port basis? (ie squid can listen to multiple ports, and the port
that the connection arrives on is used to determine the destination port).

 Make this configurable though. You don't want to allow people 
 to tunnel non-resolvable
 stuff through without the administrator explicitly deciding to.

You need to have an ACL that allows CONNECT requests destined for port 80,
otherwise you will get an ACL denied message :)

 Nah, just extend commConnectStart() and don't bother with the 
 commConnectStart2() stuff.
 I admit I'm guilty of this kind of thing but it should only 
 be temporary; never
 permanent.

If there's no objections to applying this change (in principle), I'll
re-work it to extend commConnectStart().

 Nice work though!

Thanks



Steven




RE: squid3 comments

2007-03-01 Thread Steven Wilton
 -Original Message-
 From: Guido Serassio [mailto:[EMAIL PROTECTED] 
 Sent: Friday, 2 March 2007 5:51 AM
 To: Henrik Nordstrom; Jeremy Hall
 Cc: Squid Developers
 Subject: Re: squid3 comments
 
 
 I have always hoped that some other developer with stronger C++ 
 knowledge would try to fix them, but that has never happened: all the 
 Squid 3 people seem to prefer developing new features 
 to fixing bugs.
 
 As an example: I arranged the TPROXY forward port done by Steven in 
 November, but none of the Squid 3 supporters seem to have tested 
 it or given any kind of feedback, which is a little disappointing ... :-(
 So, sometimes I ask myself why I'm still wasting my time on 
 Squid 3.
 


I know this is a minor problem, but I had problems getting the squid3
bootstrap.sh script to run, so I couldn't test the patch.  I'm pretty sure
it's got something to do with the version of automake and autoconf on my
system, but I couldn't find a reference to which versions I needed for
squid3.

I really think that if squid3 needs a specific version of these programs for
the bootstrap.sh script to run, then the script should check for the
required version.  From what I can tell the current script looks for a range
of versions, most of which will not work.
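
Such a check could be as simple as comparing versions with sort -V; a sketch of what bootstrap.sh might do (the minimum versions shown are placeholders, since the real requirement is exactly what's unclear here):

```shell
# True if version $1 >= version $2 (relies on GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Placeholder minimums -- substitute whatever squid3 really needs.
ac_ver=$(autoconf --version 2>/dev/null | sed -n '1s/.* //p')
am_ver=$(automake --version 2>/dev/null | sed -n '1s/.* //p')

version_ge "${ac_ver:-0}" 2.59 ||
    echo "WARNING: autoconf >= 2.59 required (found ${ac_ver:-none})" >&2
version_ge "${am_ver:-0}" 1.9 ||
    echo "WARNING: automake >= 1.9 required (found ${am_ver:-none})" >&2
```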

Can anyone tell me which versions of automake / autoconf are required for
squid3?

Steven




RE: cvs commit: squid/src wccp2.c

2006-10-25 Thread Steven Wilton
Oops, I assumed that because the bug reporter had applied the patch it
would compile correctly.

I've now tested and committed a new version that compiles :)

Steven

 -Original Message-
 From: Guido Serassio [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, 26 October 2006 3:15 AM
 To: Squid Developers
 Cc: [EMAIL PROTECTED]
 Subject: Re: cvs commit: squid/src wccp2.c
 
 Hi Steven,
 
 On 24/10/2006 08:26, Steven Wilton wrote:
 swilton 2006/10/24 00:26:51 MDT
 
   Modified files:
    src  wccp2.c
   Log:
   Bug #1790 (http://www.squid-cache.org/bugs/show_bug.cgi?id=1790):
   Crash on wccp2 + mask assignment + standard wccp service
 
   The original wccp2 mask assignment did not account for the use of 
   the standard wccp service in the mask assignment code.
 
   Revision  Changes    Path
   1.27      +4 -4      squid/src/wccp2.c
 
 There is some problem in your patch:
 
 if gcc -DHAVE_CONFIG_H 
 -DDEFAULT_CONFIG_FILE=\"/usr/local/squid/etc/squid.conf\" -I. -I. 
 -I../include -I. -I. -I../include -I../include -m32 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -g -O2 -D_REENTRANT 
 -MT wccp2.o -MD -MP -MF .deps/wccp2.Tpo -c -o wccp2.o wccp2.c; \
 then mv -f .deps/wccp2.Tpo .deps/wccp2.Po; else rm -f 
 .deps/wccp2.Tpo; exit 1; fi
 wccp2.c: In function `wccp2Init':
 wccp2.c:625: error: `service' undeclared (first use in this function)
 wccp2.c:625: error: (Each undeclared identifier is reported only once
 wccp2.c:625: error: for each function it appears in.)
 wccp2.c: In function `wccp2AssignBuckets':
 wccp2.c:1450: error: `service' undeclared (first use in this function)
 make[3]: *** [wccp2.o] Error 1
 make[3]: Leaving directory `/home/serassio/2.6/src'
 make[2]: *** [all-recursive] Error 1
 make[2]: Leaving directory `/home/serassio/2.6/src'
 make[1]: *** [all] Error 2
 make[1]: Leaving directory `/home/serassio/2.6/src'
 make: *** [all-recursive] Error 1
 
 Regards
 
 Guido
 
 
 -
 
 Guido Serassio
 Acme Consulting S.r.l. - Microsoft Certified Partner
 Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
 Tel. : +39.011.9530135  Fax. : +39.011.9781115
 Email: [EMAIL PROTECTED]
 WWW: http://www.acmeconsulting.it/
 
 
 




RE: 2.6.STABLE5 approaching, help needed to test patches

2006-10-23 Thread Steven Wilton
 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 24 October 2006 5:40 AM
 To: Squid Developers
 Cc: Steven
 Subject: Re: 2.6.STABLE5 approaching, help needed to test patches
 
 mån 2006-10-23 klockan 00:00 +0200 skrev Henrik Nordstrom:
  2.6.STABLE5 is approaching. Estimated release date next weekend.
  
  Part of this I'd like some help to verify the patch for Bug 
 #1779 (aka
  the commloops-2_6 branch) before that. This patch attempts 
 to fix delay
  pools again, providing fairness between multiple 
 connections fighting
  for bandwidth. This also allows poll/select to be converted 
 to the new
  comm frameworks ensuring that all 4 comm loops works the same..
 
 Now in Squid-2.6. But could use some more testing still..  I am fairly
 confident it will work in all the comm loops, but...
 
 Looking in Bugzilla I find Bug #1790, WCCP2 assignment method. Steven,
 is this patch ready for commit?
 
 Any other takers for bugfixes for 2.6.STABLE5? You'll have until
 wednesday evening. After that the tree will be locked for release.
 (which means don't commit anything after that until the 
 release without
 an OK from me).
 

I'll commit it, as there's no reason I can give for the second crash unless
something else was changed.

Steven.




RE: one more squid-2.6 rel?

2006-09-22 Thread Steven Wilton
 

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, 21 September 2006 4:12 PM
 To: Steven Wilton
 Cc: squid-dev@squid-cache.org
 Subject: RE: one more squid-2.6 rel?
 
 tor 2006-09-21 klockan 07:57 +0800 skrev Steven Wilton:
 
  OK, here it is.  It compiles cleanly now, and worked fine 
 before I had to
  merge it with the weight support patch that was applied 3 
 days ago.  I'm
  going to upgrade our caches to the latest 2.6 snapshot + 
 this patch and test
  with both hash and mask assignments.
  
  If it works fine, I'll commit later today.
 
 Looks quite fine. Still some blanks to fill in about the undocumented
 mask assignment identity fields. Maybe things will clear up 
 over time..
 
 But squid.conf notes on wccp2_assignment_method is a little 
 confusing..
 
 + Currently (as of IOS 12.4) cisco routers only support hash
 + assignment.
 + Cisco switches support the L2 redirect assignment method.
 
 Think this means to read something like:
 
   Currently (as of IOS 12.4) cisco routers only support hash assignment
   and Cisco switches only support mask assignment method.
 

Thanks, I've just committed a patch to fix this comment.

Steven




RE: one more squid-2.6 rel?

2006-09-20 Thread Steven Wilton
 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 20 September 2006 8:27 PM
 To: Steven Wilton
 Cc: squid-dev@squid-cache.org
 Subject: RE: one more squid-2.6 rel?
 
 ons 2006-09-20 klockan 09:13 +0800 skrev Steven Wilton:
  I'm about to get MASK assignment working in wccp2. I'd like 
 to see that in
  the next release if possible.
 
 Cool.
 
 If it's in the next, or the release after that depends on when you
 finish and how complex the change is to review..
 
 But it will get accepted into 2.6 when you are finished and the review
 doesn't find any bad things (not that I think it will find 
 any badness).
 

OK, here it is.  It compiles cleanly now, and worked fine before I had to
merge it with the weight support patch that was applied 3 days ago.  I'm
going to upgrade our caches to the latest 2.6 snapshot + this patch and test
with both hash and mask assignments.

If it works fine, I'll commit later today.

Regards

Steven



wccp-assignment-head.patch
Description: Binary data


RE: one more squid-2.6 rel?

2006-09-19 Thread Steven Wilton
I'm about to get MASK assignment working in wccp2. I'd like to see that in
the next release if possible.

Steven

 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 20 September 2006 9:03 AM
 To: squid-dev@squid-cache.org
 Subject: one more squid-2.6 rel?
 
 Hiya,
 
 What do you all think about another squid-2.6 release? A few bugfixes
 have gone into Squid-2.6. It'd also be good to say this 
 stable release
 has stable COSS support.
 
 
 
 
 Adrian
 
 
 




Re: Another COSS patch

2006-08-21 Thread Steven Wilton


- Original Message - 
From: Steven Wilton [EMAIL PROTECTED]
To: 'Guido Serassio' [EMAIL PROTECTED]; 'Adrian Chadd' 
[EMAIL PROTECTED]

Cc: squid-dev@squid-cache.org
Sent: Tuesday, August 15, 2006 8:25 AM
Subject: RE: Another COSS patch





What would be good now is a configuration guide for COSS
just so people
have some idea of how to configure, use, troubleshoot and tune it.
COSS has quite a lot more knobs now than it did when I
inherited it and
it's bound to generate a lot of questions once people realise
it performs
better than UFS.

A wiki page  :-)


I'll give it a shot.



As promised, I've added a WIKI page in the FAQ section.


Steven 



RE: Another COSS patch

2006-08-15 Thread Steven Wilton
 -Original Message-
 From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, 15 August 2006 5:28 PM
 To: Henrik Nordstrom
 Cc: Adrian Chadd; Steven; squid-dev@squid-cache.org
 Subject: Re: Another COSS patch
 
 On Tue, Aug 15, 2006, Henrik Nordstrom wrote:
  On Mon, 2006-08-14 at 17:00 +0800, Adrian Chadd wrote:
  
   What would be good now is a configuration guide for 
 COSS just so people
   have some idea of how to configure, use, troubleshoot and tune it.
   COSS has quite a lot more knobs now than it did when I 
  inherited it and
   it's bound to generate a lot of questions once people 
  realise it performs
   better than UFS.
  
  Have you looked at the documentation update Reuben sent?
 
 Yup; then Steven went off and tweaked it all again!
 

I'm not sure what document you're referring to.  Is there another document
that should be updated?  Or is the wiki the best place?

Regards

Steven





RE: Another COSS patch

2006-08-14 Thread Steven Wilton

 What would be good now is a configuration guide for COSS 
 just so people
 have some idea of how to configure, use, troubleshoot and tune it.
  COSS has quite a lot more knobs now than it did when I 
  inherited it and
  it's bound to generate a lot of questions once people realise 
  it performs
  better than UFS.
 
 A wiki page  :-)

I'll give it a shot.

 Steven, Adrian a question for you:
 
  I have been waiting for your COSS work to finish before doing a Windows 
  port using native Windows overlapped I/O. Now it seems to me that the 
  COSS code is really close to its definitive structure.
  Can you confirm this, or are there still some enhancements pending?

I've got no further work planned for COSS at the moment.  The only thing is
that we are still seeing "WARNING: failed to unpack meta data entries" in
our cache log after restarting squid with COSS partitions.  These do not
appear to cause any problems, but their cause does need to be investigated.

Regards

Steven




Squid 2.6 STABLE2 + COSS

2006-07-31 Thread Steven Wilton
I've been doing a fair amount of work on the COSS code in squid 2.6, and 
I've been testing it under fairly heavy load, and appear to have sorted out 
a majority of the bugs.  I think that the code would be useful to include in 
squid 2.6STABLE2, but I have only just put the latest patch on test tonight. 
I'd like to test the patch more before submitting it, but I wanted to see 
what the timeline would need to be for a chance of inclusion in 2.6STABLE2.


The code does only touch the coss directory (except for the comments in 
cf.data.pre) if that makes any difference.


regards

Steven 



Re: Squid 2.6 STABLE2 + COSS

2006-07-31 Thread Steven Wilton



I've been doing a fair amount of work on the COSS code in squid 2.6, and 
I've been testing it under fairly heavy load, and appear to have sorted 
out a majority of the bugs.  I think that the code would be useful to 
include in squid 2.6STABLE2, but I have only just put the latest patch on 
test tonight. I'd like to test the patch more before submitting it, but I 
wanted to see what the timeline would need to be for a chance of inclusion 
in 2.6STABLE2.


The code does only touch the coss directory (except for the comments in 
cf.data.pre) if that makes any difference.




Or maybe I should check whether it's been released before posting :)



COSS Crash + WCCP while rebuilding

2006-07-27 Thread Steven Wilton
I'd like to propose the attached patches for squid.

The first is a config option for wccp2 to make squid wait until all
cache_dirs have finished rebuilding before squid will register itself with
WCCP.  This will allow the rebuild to happen quickly, and avoid slow web
requests while the cache rebuilds.

The second is a trivial patch to the storeSwapMetaUnpack() function to make
it initialise a variable.  This was causing crashes with COSS when the first
object in the buffer was broken.

Steven


attachment: winmail.dat

RE: Tproxy patch

2006-07-17 Thread Steven Wilton
 Hmm.. who is redefining __FD_SETSIZE under our feet? It's already
 defined by including squid.h. Ah, linux/posix_types.h has obsolete
 kernel definitions.. (the kernel no longer uses fd_set). Its
 __kernel_fd_set definition also gets it wrong, but this type is not
 used by anyone so...
 
 Redefining it like this isn't entirely safe as there may have been
 type declarations dependent on it in the included headers. But
 hopefully those were included by squid.h before it got redefined...
 
 Ah, there it is. We should be including sys/capability.h, not
 linux/capability.h. The sys header already has the needed glue to not
 collide with glibc. Fixed.

I'm compiling under Debian (stable and unstable), and we are still seeing
the fd set limited to 1024.  It's coming in via the following path:

/usr/include/sys/capability.h
/usr/include/linux/types.h
/usr/include/linux/posix_types.h

Is it safe to include all of these headers in squid.h before __FD_SETSIZE is
redefined?  Or is this specific to the Debian include files?

Steven




RE: Tproxy patch

2006-07-17 Thread Steven Wilton
 That would be a bit messy. The problem is that those two linux headers
 aren't supposed to be included at all in userspace applications (only
 kernel). glibc provides its own types.
 
 I suppose we could use the same glue as Fedora already has in its
 sys/capability.h...
 
 --- libcap-1.10/libcap/include/sys/capability.h.foo Fri Nov  9 16:26:25 2001
 +++ libcap-1.10/libcap/include/sys/capability.h Fri Nov  9 16:28:47 2001
 @@ -21,6 +21,16 @@
   */
 
  #include <sys/types.h>
 +#include <stdint.h>
 +
 +/*
 + * Make sure we can be included from userland by preventing
 + * capability.h from including other kernel headers
 + */
 +#define _LINUX_TYPES_H
 +#define _LINUX_FS_H
 +typedef unsigned int __u32;
 +
  #include <linux/capability.h>
 
  /*
 
 
 or maybe move all capability related code out to a separate file.
 

You mean something similar to the attached patch?

Steven



capabilities.patch
Description: Binary data


Tproxy patch

2006-07-11 Thread Steven Wilton
I've just been looking at installing squid2.6 on our proxy servers, but came
across a couple of problems.  The attached patch fixes these.  The first
part of the patch enables NTLM auth even when pipeline_prefetch is enabled.
I've just had a quick check, and it looks like this is not a problem (at
least when the request is transparent).  There may be something I've not
considered, and I can understand if this part of the patch is not applied,
but I would be interested to hear why.

The second part stops squid from sending bad headers for NTLM authenticated
requests on transparent connections (due to the addition of the transparent
flag in squid 2.6).

The third part of the patch allows squid to increase the number of fd's
beyond 1024 when tproxy is enabled.  It looks like a different set of logic
has been applied to tools.c to include sys/capability.h and sys/prctl.h.
I don't know if this will work in main.c.  Applying the same include logic
to main.c may be considered a better solution.

Regards

Steven



tproxy-fixes.patch
Description: Binary data


RE: [Devel] Re: [squid-users] TPROXY on squid-2.6S1

2006-07-11 Thread Steven Wilton
I'm using Debian 3.1 (sarge) with a 2.6.15.6 + cttproxy patch.

I've attached a patch that fixes the 1024 fd bug, an NTLM auth bug, and
allows NTLM auth to work with pipeline prefetching on.  These problems
should be fixed in the next squid release.

I would like to add the following to my previous list of requirements for
tproxy + wccpv2:
- You must make sure rp_filter is disabled in the kernel
- You must make sure ip_forwarding is enabled in the kernel
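
A minimal sketch of those two kernel settings via sysctl (standard Linux names; apply the per-interface rp_filter variants as appropriate for your setup):

```shell
# Enable forwarding and disable reverse-path filtering for tproxy
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0
```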



Can you please check that you've enabled ip_forwarding in your kernel.  If
that doesn't work, I don't know if the vhost vport=80 is required in the
http_port line in the squid config (we don't have these options enabled on
our proxies).  

I use the ip_wccp module to make the kernel handle the GRE packets correctly
(which works slightly differently from the ip_gre module).  Do you have a
GRE tunnel set up in linux?  If so, what command are you running to set it
up?  I don't have an example to give you here, but I'm sure other people are
using the ip_gre module with wccp to handle the GRE packets, and should be
able to help.
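
For the ip_gre approach mentioned above, the tunnel setup usually looks something like this (interface name and addresses are placeholders; a sketch, not a tested recipe):

```shell
# Receive WCCP GRE-encapsulated packets from the router on a tunnel device
modprobe ip_gre
ip tunnel add wccp0 mode gre remote <router-ip> local <cache-ip> dev eth0
ip link set wccp0 up
```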

Regards
Steven

 -Original Message-
 From: tino [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 12 July 2006 12:53 PM
 To: Steven Wilton; 'Adrian Chadd'
 Cc: 'Kashif Ali Bukhari'; [EMAIL PROTECTED]; 'chima s'
 Subject: Re: [Devel] Re: [squid-users] TPROXY on squid-2.6S1
 
 Hi Steven,
 Many thanks for your config.  I will immediately try it on my squid box.
 
 May I know your distro and kernel version?  (For reference, I am using 
 fedora4 upgraded to kernel-2.6.15.7 with the 
 cttproxy-2.6.15-2.0.4 patch from 
 balabit.)
 
 Based on cachemgr, we need at least 2000-3000 file descriptors.
 
 
 This is my latest config, which does not work:
 
 I saw the wccp hit counters increment at the router as it redirects packets to the squid box.
 Service Identifier: 80
 Number of Cache Engines: 1
 Number of routers:   1
 Total Packets Redirected:1123
 Redirect access-list:155
 Total Packets Denied Redirect:   650922
 Total Packets Unassigned:25043
 Group access-list:   -none-
 Total Messages Denied to Group:  0
 Total Authentication failures:   0
 
 Service Identifier: 90
 Number of Cache Engines: 1
 Number of routers:   1
 Total Packets Redirected:224
 Redirect access-list:156
 Total Packets Denied Redirect:   206844
 Total Packets Unassigned:17095
 Group access-list:   -none-
 Total Messages Denied to Group:  0
 Total Authentication failures:   0
 
 I also saw hits incrementing in iptables:
 Chain PREROUTING (policy ACCEPT 11517 packets, 2009K bytes)
  pkts bytes target  prot opt in   out  source    destination
    76 24942 TPROXY  all  --   any  any  anywhere  anywhere     TPROXY redirect 0.0.0.0:3128
 
 But there are still no hits in access.log, and my host still can't open web pages.
 
 My last squid-box config :
 
 #iptables :
 iptables -t tproxy -A PREROUTING -j TPROXY --on-port 3128
 
 #part squid.conf :
  http_port 3128 transparent tproxy vhost vport=80
  always_direct allow all
  wccp2_router y.y.y.y
  wccp2_forwarding_method 1
  wccp2_return_method 1
  wccp2_service dynamic 80
  wccp2_service dynamic 90
 wccp2_service_info 80 protocol=tcp flags=dst_ip_hash priority=240 ports=80
 wccp2_service_info 90 protocol=tcp flags=src_ip_hash,ports_source priority=240 ports=80
 
  #part of my cisco config:
  ip wccp 80 redirect-list 155
  ip wccp 90 redirect-list 156
  int fasteth0 ip wccp 80 redirect out (gateway to internet)
  int fasteth1 ip wccp 90 redirect out (my client gateway)
  int fasteth3 ip wccp redirect exclude in  (squid-box attached here)
 access-list 155 permit ip host x.x.x.x any
 access-list 156 permit ip any host x.x.x.x
 
 #modules:
 [EMAIL PROTECTED] sbin]# lsmod
 Module  Size  Used by
 ipt_TPROXY  2176  1
 iptable_tproxy 17708  1
 ip_nat 18604  1 iptable_tproxy
 ip_conntrack   49836  2 iptable_tproxy,ip_nat
 ip_tables  20096  2 ipt_TPROXY,iptable_tproxy
 ip_gre 13472  0
 
 #sysctl:
 [EMAIL PROTECTED] sbin]# sysctl -a | grep rp.filter
 net.ipv4.conf.gre0.arp_filter = 0
 net.ipv4.conf.gre0.rp_filter = 0
 net.ipv4.conf.eth0.arp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 net.ipv4.conf.default.arp_filter = 0
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.arp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.lo.arp_filter = 0
 net.ipv4.conf.lo.rp_filter = 0
 
 
 many thanks & regards,
 Tino
 
 - Original Message - 
 From: Steven Wilton [EMAIL PROTECTED]
 To: 'Adrian Chadd' [EMAIL PROTECTED]; 'tino' 
 [EMAIL PROTECTED]
 Cc: 'Kashif Ali Bukhari' [EMAIL PROTECTED]; 
 [EMAIL PROTECTED]; 
 'chima s' [EMAIL

RE: Bug #1616: assertion failed: comm_generic.c:65: F->flags.open on storeResumeFD

2006-06-26 Thread Steven Wilton
 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Monday, 26 June 2006 9:58 PM
 To: Steven Wilton
 Cc: 'Squid Developers'
 Subject: RE: Bug #1616: assertion 
 failed: comm_generic.c:65: F->flags.open on storeResumeFD
 
 Mon 2006-06-26 at 12:22 +0800, Steven Wilton wrote:
 
  Are timeouts only looked at in the checkTimeouts() 
 function?  This resumes
  any fd's before running the timeout function, which should 
 avoid the error
  condition.
 
 Yes, but it's only the fd which is resumed there, not the StoreEntry..
 
 Regards
 Henrik
 

In the original code for 2.5, I had the following logic in commResumeFD() to
handle this situation

if (!(F->read_handler) || !(F->epoll_backoff)) {
    debug(5, 2) ("commResumeFD: fd=%d ignoring read_handler=%p, epoll_backoff=%d\n",
        fd, F->read_handler, F->epoll_backoff);
    F->epoll_backoff = 0;
    return;
}

I'm pretty sure I set the debug level to 2 because I was seeing hits to this
bit of code, but because it was being handled I was not worried about these
messages.

What about removing the backoff flag from the fde struct in fd_close(), and
removing the assert(flags.open) from commResumeFD.  This way, if a fd is
backed off, then closed, the backoff flag will be removed.  We are then left
with 2 possible situations:

- If the fd is backed off again, the flag will be re-added, and the fd will
be removed from the set of polled fd's.
- If the fd is not backed off again, then commResume will see that the
backoff flag is not set, and not actually do any work.

Even if commResume is called on an already open/closed FD, it will not do
any harm, as commUpdateEvents will only add/remove the fd if the read/write
handlers are set.

I've attached a patch which fixes the problem as described above.

Steven



backoff.patch
Description: Binary data


Fwdstats serverfd (Patch)

2006-06-25 Thread Steven Wilton
When I did the original serverfd work for backed off connections, I assumed
serverfd=0 was invalid.  This patch fixes the code so serverfd=-1 is the
default when there is no backed-off server fd.

I've also added an assert into fd_close to catch any cases where a fd is
closed while it is backed off.  This condition can cause crashes later on
(Bug #1616 is an example) if not detected.

Steven



serverfd.patch
Description: Binary data


RE: Connection pinning (patch)

2006-06-06 Thread Steven Wilton
 

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, 3 June 2006 6:39 AM
 To: Steven Wilton
 Cc: squid-dev@squid-cache.org
 Subject: Re: Connection pinning (patch)
 
 Thu 2006-06-01 at 19:46 +0800, Steven Wilton wrote:
  I've attached a re-worked connection pinning patch, which I believe
  fixes all the concerns raised about the previous connection pinning
  patch.  Please let me know if you can see any problems with this.
 
 Don't seem to handle the case where the server FD is closed first very
 well.. or at least I don't see any code unregistering this FD from the
 client fd on close..
 
 Also I am still not convinced we really need to support more than one
 pinned server FD per client connection. Are clients really expecting
 to be able to switch between multiple authenticated sessions to
 different servers on the same connection?
 
 Regards
 Henrik

If the server fd is closed, the client pconnLookup will fail, and the client
will re-connect.

The code in comm.c uses the timeout handler to cause the pconn to close when
the client fd is closed.  If the server connection has been closed, the
timeout handler will be NULL, so there will be no work to do when the client
fd closes.  It also records which client fd each server fd has been pinned
to, which will avoid any problems if the server fd is re-used on another
request.

I have run tests, and can confirm that when proxies are set in the web
browser, the same client-side fd will be used for multiple requests to
different server-side fd's.  There are 2 clean ways to handle this, we can
either shut down any existing pinned server connection if another request
needs to be pinned to the same client fd, or allow multiple server fd's to
be pinned to the same client fd.  You're probably right that clients will
not usually be actively using multiple pinned connections simultaneously,
but because it is a possibility, and it's easy enough to make work, I don't
see the harm in letting it work.

Steven




Re: Connection pinning (patch)

2006-06-01 Thread Steven Wilton
I've attached a re-worked connection pinning patch, which I believe fixes
all the concerns raised about the previous connection pinning patch.
Please let me know if you can see any problems with this.


This version moves some of the pconnPush logic around to make it similar to 
the pconnPop logic (and also moves the TPROXY case to handle peers better), 
and at the same time it will always store the destination host and port for 
a pinned connection (even when the connection is going to a peer).  It also 
does a pconnLookup for pinned client-side connections before going through 
the peer selection logic, and bypasses peer selection if it detects a pinned 
server connection for the request.  Finally, I've added server and client fd 
information into the fde struct, and then made comm_close close all server 
pinned fd's when it closes the client fd.


It works for me both when I set 4 parent caches, and where I have no parents 
defined.




Regards
Steven

- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]

To: Steven Wilton [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Friday, May 26, 2006 8:53 PM
Subject: RE: Connection pinning (patch)

Fri 2006-05-26 at 09:22 +0800, Steven Wilton wrote:


The current code will not notice that there is a pinned connection if a peer is
used, as the pconn connection is stored under the key of the peer and
not the requested site.

Even with the proposed changes it won't work if there is more than one
peer.  To work when there is more than one peer, the peer selection logic
must be short-circuited.


Will the current 2.6 code handle this case gracefully?


The current code will only work in the direct case, not when using even
a single parent. I haven't analyzed in detail what will happen, but it
ain't going to be good..


Wouldn't the connections need to be pinned when the server indicates
connection oriented auth so the client's request can go back to the same
server-side connection?


Exactly, but only in response to client-initiated connection oriented
auth.  Hmm.. actually no: the criterion is the client side, not the server
side.  Connections need to be pinned when the client indicates connection
oriented auth.

If the client request was not carrying any connection oriented auth then
the message is just an announce that the server is willing to do
connection oriented auth. At least in the NTLM and Negotiate schemes we
care about.


Thinking. The correct method is to enable pinning of both sides as soon
as connection oriented auth WITH details (not only the scheme name) has
been seen from the client. From this point on until the server
connection is gone or the client requests a different server the
requests should short-circuit peer selection and always end up at this
connection, which may either be a direct connection or a connection via
a specific peer.

If the server connection is lost then the process restarts and the
client connection goes back to unpinned.


A similar scheme is needed for connection oriented proxy authentication,
but here the condition on the requested server does not apply.


pinning-2.6v2.patch
Description: Binary data


Wccp2 config cleanup

2006-05-29 Thread Steven Wilton
Here's a small patch to wccp2 in squid 2.6.  It fixes a difference between
the code and the documentation in the default squid.conf ("proto=" compared to
"protocol=").  It also removes the "ports_defined" config variable, enables the
flag when ports are actually defined, and checks that this flag is set before
the config is accepted.

Regards

Steven



wccp2.patch
Description: Binary data


Connection pinning (patch)

2006-05-23 Thread Steven Wilton

Hi,

Adrian asked me to check the connection pinning code in HEAD (as we're 
actually using it on our network), and I can see a couple of problems. 
I've attached a diff that should fix them.


The first part of this patch will make sure we only mark a request with the 
pinned and auth flags if there is a server-side persistent connection 
waiting.  This will stop extra server-side fd's from being marked as pinned. 
The second part of the patch makes sure the correct code is followed after a 
pconnPop() for pinned and tproxy connections.


Steven 


pinning-2.6.patch
Description: Binary data


Re: Re: problems with the squid-2.5 conn

2006-04-20 Thread Steven Wilton


- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]

To: Steven Wilton [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Wednesday, April 19, 2006 10:56 PM
Subject: Sv: Re: problems with the squid-2.5 conn


Yes, the Proxy-support header is only relevant when the client is using a 
proxy.  Transparent interception is not proxying, and squid should behave 
only as the web server, since that is what it is in the eyes of the client.


There is an 'accel' request flag you can use to determine when it is 
appropriate to add the header.


I've made the suggested change, and it looks good now.  If proxies are set, 
then IE will not get confused with the extra header, and if proxies are not 
set the extra headers are not sent.


Do browsers send requests to different servers down the same TCP connection 
when proxies are set?


If browsers do exhibit this behaviour, my patch would cause all subsequent 
requests on the same TCP session to have the auth, pinned and must_keepalive 
flags set.  This is not as much of a problem when using intercept caching 
(as the same TCP connection will be going to the same web server), but I'm 
wondering what would happen when proxies are set.


I'm also wondering if your connection pinning work addresses this issue.

regards

Steven 



RE: problems with the squid-2.5 connection pinning

2006-04-18 Thread Steven Wilton

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 19 April 2006 12:10 AM
 To: Steven Wilton
 Cc: squid-dev@squid-cache.org
 Subject: Re: problems with the squid-2.5 connection pinning
 
 tis 2006-04-18 klockan 08:05 +0800 skrev Steven Wilton:
 
  Due to other changes in the squid source, I needed to set the 
  must_keepalive flag on the request to stop squid from closing the 
  client-side connection
 
 Hmm.. a bit curious on what this might be.  But I guess it's the 
 persistent_connection_after_error directive..

 But I think you are correct. There is little choice but to set
 must_keepalive on pinned connections. Connection semantic is a bit
 different from normal connections.

Yes, it didn't like the initial 403 error, and closed the connection.
 
  and I also had to remove the Connection: 
  Proxy-support header from being sent back to the client 
 (this caused IE to 
  get really confused).
 
 Ugh.. removing this can get you in quite bad situation if 
 there is child
 proxies.
 
 Can you share some more light on this issue?
 
When I was sending the Connection: Proxy-support header, IE only sent the
initial request, and never actually tried to complete the NTLM
authentication handshake.  Removing this header made everything work again.

I still have the Proxy-Support: Session-Based-Authentication header (as
specified in the document fragment that you posted to the list).  I'm not
sure if that makes any difference for child proxies, and IE works both with
and without this header.

regards
Steven




Re: problems with the squid-2.5 connection pinning

2006-04-17 Thread Steven Wilton
- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]

To: Steven Wilton [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Saturday, April 15, 2006 11:15 PM
Subject: Re: problems with the squid-2.5 connection pinning

Sat 2006-04-15 at 09:10 +0800, Steven Wilton wrote:


Having seen your patch, I've added the Proxy-Support: headers, and also
added a pinning flag to the request->flags struct to allow 
identification

of a pinned connection.


Looking at your patch I think you got the logic slightly wrong when
adding the flag.

Pinning is a property of the connections, not the individual requests.
From the point where the server connection has indicated use of
Microsoft authentication scheme the server-side connection should be
exclusively reserved for the specific client connection, and requests
from the same client connection should be handled both as pinned looking
for a matching reserved server connection and as authenticated even if
there is no Authorize header (Microsoft authentication only sends
Authorize headers on the first request on the connection, subsequent
requests automatically inherit the same credentials)


Hmm, you're right.  I'll follow the example in your patch to mark the client 
connection as pinned, and use this information to modify the pconn key.


Regards
Steven 



Re: problems with the squid-2.5 connection pinning

2006-04-17 Thread Steven Wilton


- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]

To: Steven Wilton [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Saturday, April 15, 2006 11:15 PM
Subject: Re: problems with the squid-2.5 connection pinning


Sat 2006-04-15 at 09:10 +0800, Steven Wilton wrote:


Having seen your patch, I've added the Proxy-Support: headers, and also
added a pinning flag to the request->flags struct to allow 
identification

of a pinned connection.


Looking at your patch I think you got the logic slightly wrong when
adding the flag.

Pinning is a property of the connections, not the individual requests.
From the point where the server connection has indicated use of
Microsoft authentication scheme the server-side connection should be
exclusively reserved for the specific client connection, and requests
from the same client connection should be handled both as pinned looking
for a matching reserved server connection and as authenticated even if
there is no Authorize header (Microsoft authentication only sends
Authorize headers on the first request on the connection, subsequent
requests automatically inherit the same credentials)


Thanks for pointing this out.  I've updated the pinning patch to fix this 
problem, and tested on my home connection.  I can confirm that it works for 
a simple http GET command, and I'll do further testing and update this list 
with the results using frontpage (which uses a variety of other http methods 
to transfer data).


Due to other changes in the squid source, I needed to set the 
must_keepalive flag on the request to stop squid from closing the 
client-side connection, and I also had to remove the Connection: 
Proxy-support header from being sent back to the client (this caused IE to 
get really confused).


regards

Steven



pinning.patch
Description: Binary data


Re: problems with the squid-2.5 connection pinning

2006-04-15 Thread Steven Wilton
I'm planning on deploying this patch out on our servers as soon as I get the 
chance.  I'll let you know how it goes.


Steven

- Original Message - 
From: Adrian Chadd [EMAIL PROTECTED]

To: Steven Wilton [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Saturday, April 15, 2006 12:53 PM
Subject: Re: problems with the squid-2.5 connection pinning


Are you planning on running this version of the patch (and the tproxy 
support)

on your production caches any time soon?

I'd like to place this on my proxy servers but I don't want to be a beta
tester. Not yet, at least. :)



Adrian

On Sat, Apr 15, 2006, Steven Wilton wrote:

We've been using a patch that allows NTLM auth to work through our 
proxies

for a while now.  The version we're using does depend on the tproxy patch
that we've also applied, and it essentially adds the client's ip address
and port to the pconn key when the server connection is spoofing the
client's ip address.  As a result of using the existing pconn code, we do
not handle the closing of the server connection any differently from any
other persistent connection failing.  This has not generated errors that 
I

have heard of from any client using our proxy servers, and we do
transparently proxy all our client access to web servers.

Having seen your patch, I've added the Proxy-Support: headers, and also
added a pinning flag to the request->flags struct to allow 
identification

of a pinned connection.  I've attached a modified version of the patch
we're using for comment, as it uses the existing persistent connection
methods and does not add any new sections of code that will terminate
connections (and this version will apply to the squid 2.5 tree without
needing the tproxy patch applied).

I've not looked into the http specs to see if I'm breaking any rules 
here,

but in practice we're not seeing problems with this style of connection
pinning.

Steven








Re: problems with the squid-2.5 connection pinning

2006-04-14 Thread Steven Wilton


- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]

To: Adrian Chadd [EMAIL PROTECTED]
Cc: squid-dev@squid-cache.org
Sent: Friday, April 14, 2006 5:32 PM
Subject: Re: problems with the squid-2.5 connection pinning


Was anything ever written to define/clarify the semantics of connection
pinning (at least for NTLM authentication) ? I couldn't find anything
with a quick browse with google defining the behaviour (so I could
see how the error condition should be handled.)

Let me know when you have something and I'll test it out.


If the server connection is gone we have little choice but to close the
client connection as well.  This is because the client considers that
connection already authenticated; sending a new authentication
challenge on the same client connection would be interpreted by the
client as "access denied for this user: ask the user if he has another
login which might be granted access to the requested object".


Is it really a problem if the client is sent a new auth challenge?  If the 
client connection is closed because the server went away, the client will 
most likely need to refresh the page, which will result in a new auth 
challenge being issued anyway.


If there are any other issues raised by keeping the client connection open, 
these other issues would be good reason to close the client connection.



We've been using a patch that allows NTLM auth to work through our proxies 
for a while now.  The version we're using does depend on the tproxy patch 
that we've also applied, and it essentially adds the client's ip address and 
port to the pconn key when the server connection is spoofing the client's ip 
address.  As a result of using the existing pconn code, we do not handle the 
closing of the server connection any differently from any other persistent 
connection failing.  This has not generated errors that I have heard of from 
any client using our proxy servers, and we do transparently proxy all our 
client access to web servers.


Having seen your patch, I've added the Proxy-Support: headers, and also 
added a pinning flag to the request->flags struct to allow identification 
of a pinned connection.  I've attached a modified version of the patch we're 
using for comment, as it uses the existing persistent connection methods and 
does not add any new sections of code that will terminate connections (and 
this version will apply to the squid 2.5 tree without needing the tproxy 
patch applied).


I've not looked into the http specs to see if I'm breaking any rules here, 
but in practice we're not seeing problems with this style of connection 
pinning.


Steven 


pinning.patch
Description: Binary data


RE: WCCPv2 support

2006-03-16 Thread Steven Wilton
I've just created the wccp2-s2_5 branch and uploaded a patch based on some
code that we've been using on our network for about 7 months now.  The code
supports multiple routers, multiple caches and multiple services.  It does
use a lot of structs to keep track of everything, which should allow
decoding of any WCCP2 packet received, and also allow extending to support
new features (i.e. MD5 authentication).

The code does have some big comments on ideas I've got for implementing
configuration options.  We are happy using the static configuration at the
moment, which is why I've not implemented the configuration code.

The wccp2 code has also been rolled into the wccp.c file, but it should be
easy to separate into a separate file if necessary.

Please feel free to modify the version I've uploaded, or even replace it
with the code that has been circulated on this list.

One main difference between my code and other versions is that I have kept a
wccp packet in memory attached to each service (and a couple of extra
values stored under each service for each router).  This simplifies the
HERE_I_AM code to basically sending a copy of the stored packet.  All the
verification code and any packet updating is done when it receives the
I_SEE_YOU packet (if required). 

Steven  

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, 16 March 2006 2:39 AM
 To: Jeremy Hall
 Cc: Squid Developers
 Subject: Re: WCCPv2 support
 
 Wed 2006-03-15 at 08:15 -0500, Jeremy Hall wrote:
 
  my username on sf.net is jthall
 
 CVS access on devel.squid-cache.org granted.
 
  
  Should I work on the existing wccpv2 branch or make a new one for my
  changes?
 
 I'd make a new one. The old one is private to visolve 
 according to our
 branch naming standard.
 
 Regards
 Henrik
 




RE: Linux filesystem speed comparison

2005-04-11 Thread Steven Wilton
 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Monday, April 11, 2005 6:09 PM
 To: Steven Wilton
 Cc: 'Squid Developers'
 Subject: Re: Linux filesystem speed comparison
 
 On Mon, 11 Apr 2005, Steven Wilton wrote:
 
  We are running some large proxies in our Melbourne POP, and 
 we graph the CPU
  counters available in the 2.6 linux kernel to give us an 
 idea of what the
  CPU is doing.  We noticed that the CPU was spending large 
 amounts of time
  (around 60%) in an I/O wait state, which is when the CPU is 
 idle, but there
  are pending disk i/o operations.
 
 Which as such isn't that harmful to Squid (aufs/diskd) as 
 Squid continues 
 processing requests while there are pending I/O requests.  But 
 on the other 
 hand when the disk %util level approaches 100 you reach the 
 limit of what 
 the drive can sustain.

My thoughts were that if the numbers for %CPU in system and user were
similar, then a more efficient filesystem would arrange the data on disk
in such a way that the disk spends less time performing the operations.

I will add graphs for the /proc/diskstats value that records the amount of
time the disk is actually performing operations, and see how this compares
across the different filesystems (I looked at the iostat source to see how
it calculates the %util value).

Regards

Steven




RE: Linux filesystem speed comparison

2005-04-11 Thread Steven Wilton
  -Original Message-
 From: Joe Cooper [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, April 12, 2005 5:41 AM
 To: Steven Wilton
 Cc: 'Squid Developers'
 Subject: Re: Linux filesystem speed comparison
 
 The only test I know of that accurately predicts how a proxy will 
 perform when given real load is Polygraph.  And depending on the 
 hardware configuration, either ext2/ext3 or reiserfs will easily 
 outperform xfs.  In my experience, ReiserFS is a better performer 
 assuming CPU is not a bottleneck.  But it is a much heavier 
 user of CPU, 
 and so some test results (like Duane's extensive benchmarks 
 from a year 
 or more ago) show ext2/3 performing measurably better than 
 ReiserFS.  A 
 Polymix-4 test will fill the cache twice and then begin the 
 test...so it 
 takes into account the decline in performance that hits all 
 filesystems.
 
 It depends on the balance of hardware, but I'd be extremely 
 surprised if 
 XFS performs better than either reiser or ext2/3 for Squid 
 workloads on 
 /any/ system.  So I have to assume your methodology is 
 slightly flawed.
 ;-)

That's what I thought, but there has been a bit of XFS work in recent
kernels, and after my initial observations I was wondering if this has
improved the performance with squid's filesystem load.

 While I have found that ext3 (when configured correctly) has improved 
 performance for Squid quite a bit over ext2, it is still no match for 
 ReiserFS on our hardware, which always has more than enough 
 CPU for the 
 disk bandwidth available.  But, I can certainly imagine a hardware 
 configuration that would lead to ext3 performing better than ReiserFS 
 (especially since Duane has proven that it is possible by putting 6 
 10,000 RPM disks on a relatively wimpy CPU and testing the 
 configuration 
 extensively with polygraph).

The machines are a bit old (P3-500), but they've only got 3x 9Gb SCSI cache
disks, and they're not running anywhere near 100% load.

 I'm always interested in conflicting reports, however.  If 
 you've got a 
 case that makes XFS faster for Squid against polygraph, I'd 
 love to see 
 the results and configuration.

I had a quick look at polygraph before, but I didn't get very far in testing
it.  I would like to produce some polygraph figures for the proxies, so I
will see what I can do to make a test system.  My only concern is that the
proxies may be able to process requests faster than the polygraph hardware
can serve them.

From memory there are a lot of options available for polygraph, and I was
not sure how to produce meaningful results.  Any help would be appreciated.

Regards

Steven




Linux filesystem speed comparison

2005-04-10 Thread Steven Wilton
We are running some large proxies in our Melbourne POP, and we graph the CPU
counters available in the 2.6 linux kernel to give us an idea of what the
CPU is doing.  We noticed that the CPU was spending large amounts of time
(around 60%) in an I/O wait state, which is when the CPU is idle, but there
are pending disk i/o operations.

Some other recent tests have shown that on linux the aufs disk type gives us
the best performance, but I wanted to see if I could reduce the amount of
I/O wait time on the proxy servers by changing the filesystem.

In Perth we have 4 identical proxies (P3-500, 512Mb RAM, 3x9Gb cache disks,
linux 2.6.10 kernel, squid s2_5-epoll tree), which we were running with the
ext3 filesystem.  I reformatted 3 of them with reiserfs, xfs and jfs to see
what difference each of these filesystems would have on the I/O wait.  The
mount options for each are as follows:

/dev/sdb1 on /var/spool/squid/disk1 type reiserfs (rw,noatime,notail)
/dev/sdb1 on /var/spool/squid/disk1 type xfs (rw,noatime)
/dev/sdb1 on /var/spool/squid/disk1 type ext3 (rw,noatime,data=writeback)
/dev/sdb1 on /var/spool/squid/disk1 type jfs (rw,noatime)

Below is a single set of results from the daily averages of the graphs we
have.  I have taken 10 samples of 5 mijnute averages over the past week, and
they come up with similar figures (the 5 minute samples are pasted at the
end of this e-mail):

Filesystem  User  Sys  IO    Req/sec  U/R   S/R   I/R
Reiser      7.6   8.4  14.1  28       0.27  0.17  0.50
Xfs         8.4   5.3  4.4   27.3     0.31  0.19  0.16
Ext3        7.6   4.4  10.4  28.2     0.27  0.16  0.15
Jfs         7.3   4.1  15.8  26.6     0.27  0.15  0.59


The numbers are as follows:
User- %CPU user
Sys - %CPU system
IO  - %CPU IO wait
Req/sec - Requests/sec for squid
U/R - User/(Req/sec)
S/R - Sys/(Req/sec)
I/R - IO /(Req/sec)

The interesting thing is that this test shows that on a 2.6.10 kernel, XFS
is the clear winner for I/O wait, followed by ext3 with data=writeback.  I
was not surprised to see reiser come off worse than ext3, as I have
previously tried reiser on our proxies (on a 2.2 kernel) and noticed that
while the proxy was initially a lot quicker, the cache performance dropped
as the disk filled up.

I thought I'd post this to squid-dev for comments first, as I have read
other posts that say that squid+reiser is the recommended combination, and
was wondering if there are other tests that I should perform.

Steven


The 5 minute samples:

        User  Sys  IO    Req/sec  U/R   S/R   I/R
5/4 9:43am
reiser  11.5  7.3   12.2  54      0.21  0.14  0.23
xfs     12.3  7.4   4.5   49.4    0.25  0.15  0.09
ext3    12.2  8.8   10    48      0.25  0.18  0.21
jfs     9     5.3   10.2  40.9    0.22  0.13  0.25

5/4 8:23pm
reiser  11.8  8     13    46.1    0.26  0.17  0.28
xfs     12.9  8     6.2   56.5    0.23  0.14  0.11
ext3    13.4  8.8   14.3  59.7    0.22  0.15  0.24
jfs     12.4  7.8   12.8  56.2    0.22  0.14  0.23

6/4 7:23am
reiser  4.3   2.2   5.1   21.2    0.20  0.10  0.24
xfs     5.2   2.6   1.2   24.3    0.21  0.11  0.05
ext3    4.1   2.3   5.1   13.9    0.29  0.17  0.37
jfs     5.9   2.7   4.7   17      0.35  0.16  0.28

6/4 10:47am
reiser  10.9  7.6   12.3  48.1    0.23  0.16  0.26
xfs     11.5  7.5   5.6   50.7    0.23  0.15  0.11
ext3    11.1  7.4   14.1  51      0.22  0.15  0.28
jfs     10.2  6.2   13.2  42.1    0.24  0.15  0.31

6/4 12:02pm
reiser  10.1  6.2   12.5  49.7    0.20  0.12  0.25
xfs     11.6  8.3   6.4   49      0.24  0.17  0.13
ext3    12.3  8.2   14.4  48.8    0.25  0.17  0.30
jfs     10.1  6.2   11.2  41.3    0.24  0.15  0.27

6/4 15:20pm
reiser  10.2  6.8   12.5  47.8    0.21  0.14  0.26
xfs     13.9  9.9   7.6   58.9    0.24  0.17  0.13
ext3    11.9  7.9   13.5  46.9    0.25  0.17  0.29
jfs     13.4  6     13.4  41.9    0.32  0.14  0.32

7/4 07:54am
reiser  7.8   4.7   10.7  34.8    0.22  0.14  0.31
xfs     8.4   5.6   4.7   26.9    0.31  0.21  0.17
ext3    7.5   5.2   10.2  29.4    0.26  0.18  0.35
jfs     6     3.7   9.3   24.8    0.24  0.15  0.38

7/4 1:44pm
reiser  12    8.5   19.7  55.3    0.22  0.15  0.36

xfs

RE: Memory usage fix (patch)

2005-03-17 Thread Steven Wilton

 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
 Sent: Friday, March 18, 2005 4:56 AM
 To: Steven Wilton
 Cc: 'Squid Developers'
 Subject: RE: Memory usage fix (patch)
 
 On Thu, 17 Mar 2005, Steven Wilton wrote:
 
  The problem is that once squid starts hitting swap we start getting
  complaints.  We have also noticed that certain clients have an unusual
  usage pattern that seems to cause squid to use lots of memory, obviously
  bypassing the checks in fwdCheckDefer.  I'll see if I can track this down.
 
 Either your use of the defer function is not working, or you have clients
 triggering the race condition I indicated yesterday when the original
 client disconnects from the request.
 

You're correct, the defer function was not being used correctly.  The
problem was that enabling it caused a big increase in the CPU usage of
squid.  I think I've come up with an acceptable solution (CPU usage
increases with network i/o).  I'll update the epoll-2_5 tree with this
change.

-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.308 / Virus Database: 266.7.3 - Release Date: 3/15/2005
 



Intro

2005-03-16 Thread Steven Wilton
Hi,

I'm a network administrator for a national ISP in Australia, and we
currently use squid.  I've developed a patch for squid 2.5 to enable epoll
support under linux while waiting for squid 3.0 to be released, and wanted
to publish this.

My sourceforge account name is swsf

Regards

Steven




Memory usage fix (patch)

2005-03-16 Thread Steven Wilton
I've seen a few queries over the years regarding squid's memory usage.  While
working on the epoll support for squid, I found one of the reasons that
squid's memory usage would go from a stable 300MB to 600MB overnight for no
apparent reason.

The problem is that uncacheable objects (ie size > maximum_object_size, or
download managers doing multiple partial requests on large files) are always
held in memory.  Squid does free this memory as the data is sent to the client,
but it doesn't look like there's a backoff mechanism when the data is arriving
at a much faster rate than it is being sent to the client.

The attached patch fixed this problem with my epoll support, and I believe
that it should also work under poll and select.

Steven

diff -urN squid-2.5.STABLE9.orig/src/http.c squid-2.5.STABLE9-epoll/src/http.c
--- squid-2.5.STABLE9.orig/src/http.c   Tue Mar 15 16:17:03 2005
+++ squid-2.5.STABLE9-epoll/src/http.c  Thu Mar 17 07:42:52 2005
@@ -579,7 +579,24 @@
 	comm_close(fd);
 	return;
     }
-    /* check if we want to defer reading */
+    /* check if we want to defer reading (this stops squid from using too much memory for in-transit objects) */
+    /* If the object has no swapout entry, and the memory footprint is too big */
+    if ((!(entry->mem_obj->swapout.sio)) && ((entry->mem_obj->inmem_hi - entry->mem_obj->inmem_lo) > READ_AHEAD_GAP)) {
+	/* Flush written data out of memory */
+	storeSwapOut(entry);
+
+	if ((entry->mem_obj->inmem_hi - entry->mem_obj->inmem_lo) > READ_AHEAD_GAP) {
+	    /* Wait for more data or EOF condition */
+	    if (httpState->flags.keepalive_broken) {
+		commSetTimeout(fd, 10, NULL, NULL);
+	    } else {
+		commSetTimeout(fd, Config.Timeout.read, NULL, NULL);
+	    }
+	    commSetSelect(fd, COMM_SELECT_READ, httpReadReply, httpState, 0);
+	    commSetDefer(fd, NULL, NULL);
+	    return;
+	}
+    }
     errno = 0;
     read_sz = SQUID_TCP_SO_RCVBUF;
 #if DELAY_POOLS


RE: Memory usage fix (patch)

2005-03-16 Thread Steven Wilton
 

 -Original Message-
 From: Steven Wilton [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, March 17, 2005 10:57 AM
 
  -Original Message-
  From: Henrik Nordstrom
  Sent: Thursday, March 17, 2005 9:12 AM
 
  On Thu, 17 Mar 2005, Steven Wilton wrote:
  
   The problem is that uncacheable objects (ie size > maximum_object_size,
   or download managers doing multiple partial requests on large files) are
   always held in memory.  Squid does free this memory as the data is sent
   to the client, but it doesn't look like there's a backoff mechanism when
   the data is arriving at a much faster rate than it is being sent to the
   client.
  
  Normally this is dealt with by the fwdCheckDefer function.  Maybe your
  epoll implementation does not use the filedescriptor's defer function to
  back off when needed?
 
 I did make some changes to my original epoll patch, and the epoll patch
 now works with the commSetDefer() function


Actually... how is the fwdCheckDefer function meant to slow this down?  The
way I read the code is that it follows this logic:

httpReadReply
- check aborted && return
- read data from socket
- if length > 0 and we have processed headers
  - storeAppend
  - switch httpPconnTransferDone (check whether the transfer is complete)
    - if transfer not complete, queue fd for read with callback to
      httpReadReply

So, if the headers have been processed, and the read call returns data, we
queue another read without checking whether to defer.

The patch that I submitted checks first that there is no swap object.  If
entry->mem_obj->swapout.sio is not set, is it possible for another request
to be fetching from this object?  I was pretty sure that I had tested this
case, and found that without a sio there was no reference for squid to
generate a cache hit.  (It was 7 months ago, and I could be mistaken.)

Having said this, the patch I originally posted did not seem to work well
with my testing (100% cpu usage with epoll).  I am going to do further work
on it.

Steven
