Re: Blocker bzr problem on Windows

2008-04-21 Thread Guido Serassio

Hi Henrik,

Sorry for the delayed response. :-(

At 22:34 15/04/2008, Henrik Nordstrom wrote:

Tue 2008-04-15 at 20:30 +0200, Guido Serassio wrote:

 I cannot waste my very limited time trying to fix the development
 tools that I should use. :-(

My proposal if you find that you have time to work on Squid-3, ignoring
the tools problem:

Create an NT branch in the devel CVS repository, and do your Windows
port update work there, using the tools you are used to. There is no
problem mirroring Squid-3.0 in the devel cvs repository if you need.
Then submit changes to trunk / 3.0 when suitable.

When you get to the point that Squid-3 runs properly on Windows and it's
time for a release, then we can revisit the tools & branch problem..
hopefully by then bzr has got its act together, and we also have a
clearer view of things..


This could be a way.
But first, I'd like to focus on Squid 2 before the 2.7 release, hoping
that in the meantime the bzr people will fix the problem.


I'd like to run some in-depth Windows testing on 2.7 and to add a
Windows-specific port of the domain resolv.conf directive. I should
have some time during the coming weekend: in Italy, 25 April is a holiday.


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: Blocker bzr problem on Windows

2008-04-21 Thread Henrik Nordstrom
Mon 2008-04-21 at 10:04 +0200, Guido Serassio wrote:
 But first, I'd like to focus on Squid 2 before the 2.7 release, hoping
 that in the meantime the bzr people will fix the problem.

As good a plan as any. Porting 2.7 should be very straightforward from
2.6.

 I'd like to run some in-depth Windows testing on 2.7 and to add a
 Windows-specific port of the domain resolv.conf directive. I should
 have some time during the coming weekend: in Italy, 25 April is a holiday.

That would be great. I am not in a rush to get 2.7 out. Squid-2 work at
the moment is focused on finalizing 2.6 so it can then be left at rest
when 2.7.STABLE1 has been released..

Regards
Henrik



Re: client_side and comm_close

2008-04-21 Thread Amos Jeffries

Alex Rousskov wrote:

On Mon, 2008-04-21 at 15:48 +1200, Amos Jeffries wrote:

comm_close(fd) API:

1) No I/O callbacks will be dialed after comm_close is called (including
the time when comm_close is running).


Sounds good.


2) All close callbacks registered before comm_close was called will be
called asynchronously, sometime after comm_close exits.


Sounds good.


3) The code can ask Comm whether a FD associated with a close callback
has been closed. The answer will be yes after comm_close is called and
no before that. This interface needs to be added. Direct access to
fd_table[fd].flags will not work because the same FD could have been
assigned to another connection already. The code will have to use its
close callback to determine FD status.

Sounds good, BUT, direct access to fd_table pieces may need to be blocked
entirely (private:) so code is forced to go through the Comm API properly.


Yes, except if we want to avoid modifying old code that can still access
those flags directly because it gets immediate close callbacks (more on
that below).


(2) states that the higher-level close callbacks may be run at any time,
i.e. after the callback (3) refers to is run. This leaves a window open for
disaster, unless the closing callbacks are made immediate, and back we go
to recursion...


Those are the same close callbacks! There are no low-level and
high-level close callbacks here. The external code can use its stored
callback pointer to get FD status even after that close callback has
been called. There is no problem with that. The callback will not look
at fd_table in that case; it will just say "yes, the fd is closed as far
as you should be concerned".

And, per recent suggestions, old code will get immediate close callbacks
so it does not need to be modified to use the close callback pointer to
ask about FD status.
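
As a minimal illustration of that pattern (the class and member names
below are made up, not actual Squid-3 code; only the classic
comm_add_close_handler handler shape is assumed):

    // Hypothetical sketch: track closure via our registered close handler.
    class ConnectionUser
    {
    public:
        ConnectionUser() : fdIsClosed(false) {}

        void watch(int fd) {
            comm_add_close_handler(fd, &ConnectionUser::NoteClosed, this);
        }

        // Comm dials this once, asynchronously, after comm_close(fd).
        static void NoteClosed(int, void *data) {
            static_cast<ConnectionUser *>(data)->fdIsClosed = true;
        }

        // Answer "is my fd closed?" from our own state, never from
        // fd_table[fd].flags: the same fd number may already belong to a
        // brand new connection by the time we ask.
        bool closed() const { return fdIsClosed; }

    private:
        bool fdIsClosed;
    };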


Alright. I got to this branch of the thread before the other, which makes
things clearer. Same for the below.





4) Registering any callback after comm_close has been called is wrong.
Comm will assert if it can detect such late registrations. Not all late
registrations will be detected because the same FD can be already
assigned to a different (new) connection[*].

That non-detection seems to me to be a worry. The same problem as in (3)
occurs here. (4) can guarantee that the closing callbacks don't play nasty
re-registrations, but only if they are called immediately instead of
scheduled.


Sorry, I do not understand. The late registration of a close callback
can come from anywhere. The old code relies on fd only. There is no way
for comm to distinguish whether the caller is using a FD from a closed
connection or a new one. This problem exists in the old code as well so
there is no new danger here! This problem can only be solved when we
have comm handlers of one sort or another.


The above comm_close API is easy to implement without massive code
changes. Do you think it is the right API? If not, what should be
changed?

Apart from the worry with immediate vs delayed closing callbacks.

To reduce that worry somewhat, I think the callbacks which actually use
ERR_COMM_CLOSING for anything other than immediate abort will need to be
replaced with two: a normal callback that checks the FD is open, and a
simple closing callback.


I am confused by the "normal callback that checks the FD is open" part.
What is that for? Are you talking about some internal comm calls to
close the FD? I believe that handler code that currently does not ignore
ERR_COMM_CLOSING notifications will need to be moved into the close
handler code (because only the close handler will be called if we
eliminate ERR_COMM_CLOSING).
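
A hedged sketch of that migration (the handler names below are invented;
the IOCB and close-handler shapes follow the usual comm conventions):

    // Before: the I/O callback special-cases the closing notification.
    static void
    someReadHandler(int fd, char *buf, size_t size, comm_err_t flag,
                    int xerrno, void *data)
    {
        if (flag == COMM_ERR_CLOSING) {
            // cleanup used to live here
            return;
        }
        // ... normal read processing ...
    }

    // After: I/O callbacks are never dialed once comm_close() starts, so
    // the cleanup moves into the registered close handler instead.
    static void
    someCloseHandler(int fd, void *data)
    {
        // cleanup that used to hide behind ERR_COMM_CLOSING goes here
    }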


Never mind. I was saying it wrong, but meaning what you have re-stated.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: cvs commit: squid/src dns_internal.c

2008-04-21 Thread Guido Serassio

Hi Henrik,

At 02:41 16/04/2008, Henrik Nordstrom wrote:

hno 2008/04/15 18:41:41 MDT

  Modified files:
src  dns_internal.c
  Log:
  Add support for the resolv.conf domain directive, and also 
automatically derived default domain


  this patch adds the domain resolv.conf directive, similar to search but
  only accepting a single domain.

  In addition it adds support for automatically deriving the domain from
  the fully qualified hostname.
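
  For example (hypothetical hostname): on a machine whose fully qualified
  hostname is proxy.example.com, with neither domain nor search present in
  resolv.conf, the derived default domain would be example.com.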


What happens when both search and domain keywords are specified in
resolv.conf?


It seems to me that the last one parsed overwrites the domain search list.

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: client_side and comm_close

2008-04-21 Thread Amos Jeffries

Alex Rousskov wrote:

On Mon, 2008-04-21 at 16:02 +1200, Amos Jeffries wrote:

On Sun, 2008-04-20 at 22:01 +0300, Tsantilas Christos wrote:

Maybe it can be easier:

The ease of implementation is a separate question. We still have to
agree on what we are implementing, and the items below attempt to define
that. Ambiguity and lack of clear APIs are quite dangerous here (as the
related bugs illustrate!).


Just keep comm_close as is, and in AsyncCallsQueue just
cancel / do not execute asyncCalls for which fd_table[fd].flags.closing
is set.

Implementing #1 below can indeed be as simple as you sketch above. We
need to make sure that the final call that closes the FD and makes the fd
value available to other connections is placed last (that may already be
true, and is easy to fix if not).

Not true.

IMO, the proposed API's _very_ first two lines of code in comm_close are to
register a special Comm callback to perform the fclose() call, and then to
immediately set the fd_table "closed" flag for the rest of the comm_close
process.


Agreed on the flag, disagreed on the call. The special/internal Comm
call (to self) should be scheduled last (after all close callbacks) and
not first because the close handler might need access to some FD-related
info. That info should be preserved until all close handlers have been
called.


With that condition at the start we can guarantee that any registrations
made during the close sequence are either non-FD-relevant or caught.


Yes, the flag is sufficient for that. The internal "close for good" call
can still be last.


I was thinking the close-for-good call would get caught as an fd
operation on a closed fd by the Async stuff, if scheduled after the flag.





The special Comm callback is only special in that it is not required to
check the "open" flag before fclose()'ing the system-level FD, which will
allow new connections to open on the FD.


It is special because it is calling an internal comm method not some
external close handler. Its profile and parameters are different.  I
would not call it a callback because of that, but the label is not
important. It is not a close callback in terms of
comm_add_close_handler.


Yet it seems to me it needs to be run as an async call after the other
async close-handlers are run, due to the non-determinism of the async
timing.



Between the initial comm_close() call and the special Comm callback, we
don't need to care if callbacks write their shutdown statements to the fd
(it's still technically open), but the closed flag prevents comm accepting
any new delayed event registrations or reads.


Exactly. Our only problem is with code that calls comm with a stale fd,
after that fd has been really closed and a new connection was opened
with the same fd. That's not a new problem and we will solve it in v3.2
using comm handlers.

I hope the above puts us on the same page about the implementation
sketch for the comm_close API, but please yell if something still seems
broken.
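
Pulled together, a rough pseudocode sketch of that ordering (the two
schedule* helpers are illustrative names only, not actual comm code):

    void
    comm_close(int fd)
    {
        // 1. Flag first: from here on comm accepts no new I/O, reads,
        //    or callback registrations for this fd.
        fd_table[fd].flags.closing = 1;

        // 2. Schedule every registered close callback asynchronously.
        scheduleCloseCallbacks(fd);

        // 3. Schedule the internal "close for good" call last, so all
        //    close handlers still see FD-related info before the
        //    descriptor is really closed and its number becomes reusable.
        scheduleInternalFinalClose(fd);
    }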


Okay. It looks the way I would implement it from scratch.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: client_side and comm_close

2008-04-21 Thread Alex Rousskov
On Mon, 2008-04-21 at 23:45 +1200, Amos Jeffries wrote:
  IMO, the proposed API's _very_ first two lines of code in comm_close are to
  register a special Comm callback to perform the fclose() call, and then to
  immediately set the fd_table "closed" flag for the rest of the comm_close
  process.
  
  Agreed on the flag, disagreed on the call. The special/internal Comm
  call (to self) should be scheduled last (after all close callbacks) and
  not first because the close handler might need access to some FD-related
  info. That info should be preserved until all close handlers have been
  called.
  
  With that condition at the start we can guarantee that any registrations
  made during the close sequence are either non-FD-relevant or caught.
  
  Yes, the flag is sufficient for that. The internal "close for good" call
  can still be last.
 
 I was thinking the close-for-good call would get caught as an fd
 operation on a closed fd by the Async stuff, if scheduled after the flag.

Actually, the internal close-for-good call handler asserts that the flag
is set! This internal handler does not have the structure or the
restrictions of the public comm_* functions...
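
Roughly, as a hedged sketch (the handler name is invented; only the
flag and the general shape come from the discussion above):

    // The internal close-for-good handler runs outside the public comm_*
    // entry points, so instead of being rejected by the closing flag it
    // asserts that comm_close() already set it.
    static void
    commCloseForGood(int fd)
    {
        assert(fd_table[fd].flags.closing);
        fd_close(fd);   // really release the descriptor; its number
                        // becomes reusable from here on
    }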

  The special Comm callback is only special in that it is not required to
  check the "open" flag before fclose()'ing the system-level FD, which will
  allow new connections to open on the FD.
  
  It is special because it is calling an internal comm method not some
  external close handler. Its profile and parameters are different.  I
  would not call it a callback because of that, but the label is not
  important. It is not a close callback in terms of
  comm_add_close_handler.
 
 Yet it seems to me it needs to be run as an async call after the other
 async close-handlers are run, due to the non-determinism of the async timing.

Yes, it must be called asynchronously. The call is scheduled after all
close callbacks are scheduled so that it will be dialed last.

Alex.




Re: cvs commit: squid/src dns_internal.c

2008-04-21 Thread Henrik Nordstrom
Mon 2008-04-21 at 12:57 +0200, Guido Serassio wrote:
 What happens when both search and domain keywords are specified in
 resolv.conf?

The last one is used.

 It seems to me that the last one parsed overwrites the domain search list.

Yes. As it does in the glibc resolver..

Regards
Henrik



Re: cvs commit: squid/src dns_internal.c

2008-04-21 Thread Guido Serassio

Hi Henrik,

At 16:16 21/04/2008, Henrik Nordstrom wrote:

Mon 2008-04-21 at 23:28 +1200, Amos Jeffries wrote:

 'tis supposed to prefix the existing search list

Not in my tests, using Linux GLIBC as test platform to compare with.

search replaces domain, and vice versa.

domain takes a single domain, search a list.

Multiple search statements replace the earlier ones.


But is there some standard defined about this?
This could be a GLIBC bug...

On Windows, the search list always overrides the machine domain.

What should the correct behaviour be?

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: cvs commit: squid/src dns_internal.c

2008-04-21 Thread Henrik Nordstrom
Mon 2008-04-21 at 16:27 +0200, Guido Serassio wrote:
 But is there some standard defined about this?

Doubtful. resolv.conf is not specified in SUS, so the closest thing to a
standard for resolv.conf is the BIND implementation, I think.

 This could be a GLIBC bug...

Could be. Not sure domain is meant to replace search if specified
after it.. but it's at least a well documented one:

   The domain and search keywords are mutually exclusive. If more than
   one instance of these keywords is present, the last instance wins.

Documented on both Linux glibc and FreeBSD libc.
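
For example, with a resolv.conf like this (hypothetical domains), the
search line is the last instance and wins, so the domain line is ignored:

    domain example.com
    search lab.example.com corp.example.com

Unqualified names are then tried against lab.example.com and
corp.example.com only; swap the two lines and only example.com is used.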

 On Windows, the search list always overrides the machine domain.

Aren't these parameters in the registry on Windows, which means no
ordering issues?

 What should the correct behaviour be?

I think the comment above specifies the correct behavior: the admin
SHOULD specify only one of domain/search, and at most once. If both are
specified, or one is specified multiple times, the result may be
implementation dependent.

Regards
Henrik



Re: Long response header problem

2008-04-21 Thread Axel Westerhold
Ok,

Did some additional checks.

It should be:


 --- src/http.cc 2008-04-01 13:54:38.0 +0200
 +++ src/http.cc 2008-04-21 16:42:19.0 +0200
 @@ -75,7 +75,7 @@
      surrogateNoStore = false;
      fd = fwd->server_fd;
      readBuf = new MemBuf;
 -    readBuf->init(4096, SQUID_TCP_SO_RCVBUF);
 +    readBuf->init(SQUID_TCP_SO_RCVBUF, SQUID_TCP_SO_RCVBUF);
      orig_request = HTTPMSGLOCK(fwd->request);
 
      if (fwd->servers)

Which is, btw, the coding used in ICAP, where SQUID_TCP_SO_RCVBUF = 16384.

Regards,
Axel


 Hi there,
 
 I ran, or rather a customer ran, into a problem today which sounded like this
 bug.
 
 http://www.squid-cache.org/bugs/show_bug.cgi?id=2001
 
 
 So I applied the attached patch to squid-3.0.STABLE4 and did a quick test.
 Still the same problem. My cache.log looks like this:
 
 
 ...
 comm_read_try: FD 16, size 4094, retval 2896, errno 0
 ...
 HttpMsg::parse: failed to find end of headers (eof: 0)
 ...
 http.cc(1050) needs more at 2896
 http.cc(1206) may read up to 1199 bytes from FD 16
 ...
 comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=1
 F->write_handler=0
 comm_select(): Calling read handler on FD 16
 comm_read_try: FD 16, size 1198, retval 1198, errno 0
 ...
 HttpMsg::parse: failed to find end of headers (eof: 0)
 ...
 http.cc(1050) needs more at 4094
 http.cc(1206) may read up to 1 bytes from FD 16
 ...
 comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=0
 F->write_handler=0
 comm_select(): no read handler for FD 16
 
 and so on and so on. So I checked the coding in http.cc and changed it as
 follows.
 
 --- src/http.cc 2008-04-01 13:54:38.0 +0200
 +++ src/http.cc 2008-04-21 16:42:19.0 +0200
 @@ -75,7 +75,7 @@
      surrogateNoStore = false;
      fd = fwd->server_fd;
      readBuf = new MemBuf;
 -    readBuf->init(4096, SQUID_TCP_SO_RCVBUF);
 +    readBuf->init(16384, SQUID_TCP_SO_RCVBUF);
      orig_request = HTTPMSGLOCK(fwd->request);
 
      if (fwd->servers)
 
 
 
 Now it works but I am not sure if a) this is a good solution and b) a
 stable one :-).
 
 Maybe someone with more knowledge can do a check.
 
 Regards,
 Axel Westerhold
 DTS Systeme GmbH
 



Long response header problem

2008-04-21 Thread Axel Westerhold
Hi there,

I ran, or rather a customer ran, into a problem today which sounded like this
bug.

http://www.squid-cache.org/bugs/show_bug.cgi?id=2001


So I applied the attached patch to squid-3.0.STABLE4 and did a quick test.
Still the same problem. My cache.log looks like this:


...
comm_read_try: FD 16, size 4094, retval 2896, errno 0
...
HttpMsg::parse: failed to find end of headers (eof: 0)
...
http.cc(1050) needs more at 2896
http.cc(1206) may read up to 1199 bytes from FD 16
...
comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=1
F->write_handler=0
comm_select(): Calling read handler on FD 16
comm_read_try: FD 16, size 1198, retval 1198, errno 0
...
HttpMsg::parse: failed to find end of headers (eof: 0)
...
http.cc(1050) needs more at 4094
http.cc(1206) may read up to 1 bytes from FD 16
...
comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=0
F->write_handler=0
comm_select(): no read handler for FD 16

and so on and so on. So I checked the coding in http.cc and changed it as
follows.

--- src/http.cc 2008-04-01 13:54:38.0 +0200
+++ src/http.cc 2008-04-21 16:42:19.0 +0200
@@ -75,7 +75,7 @@
     surrogateNoStore = false;
     fd = fwd->server_fd;
     readBuf = new MemBuf;
-    readBuf->init(4096, SQUID_TCP_SO_RCVBUF);
+    readBuf->init(16384, SQUID_TCP_SO_RCVBUF);
     orig_request = HTTPMSGLOCK(fwd->request);

     if (fwd->servers)



Now it works but I am not sure if a) this is a good solution and b) a
stable one :-).

Maybe someone with more knowledge can do a check.

Regards,
Axel Westerhold
DTS Systeme GmbH




Re: Long response header problem

2008-04-21 Thread Axel Westerhold
And one more. It might be this patch which solved the issue:

--- src/http.cc 2008-04-01 13:54:38.0 +0200
+++ src/http.cc 2008-04-21 19:11:47.0 +0200
@@ -1200,7 +1200,7 @@
 void
 HttpStateData::maybeReadVirginBody()
 {
-    int read_sz = replyBodySpace(readBuf->spaceSize());
+    int read_sz = replyBodySpace(readBuf->potentialSpaceSize());

     debugs(11,9, HERE << (flags.do_next_read ? "may" : "wont") <<
            " read up to " << read_sz << " bytes from FD " << fd);


spaceSize will only return the size left from the initial size. This will
result in read_sz < 2 and a return some lines down in http.cc.

potentialSpaceSize will return max_capacity - terminatedSize, which seems
more logical.
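
Illustrative numbers only, using the semantics described above (and
glossing over the exact terminatedSize accounting):

    MemBuf readBuf;
    readBuf.init(4096, SQUID_TCP_SO_RCVBUF);  // capacity 4096, max 16384
    // ... 4094 header bytes appended, headers still unparsed ...
    // readBuf.spaceSize()          -> 2    (room left in current allocation)
    // readBuf.potentialSpaceSize() -> ~12K (max_capacity - terminatedSize)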

Regards,
Axel Westerhold
DTS Systeme GmbH


 Ok,
 
 Did some additional checks.
 
 It should be:
 
 
  --- src/http.cc 2008-04-01 13:54:38.0 +0200
  +++ src/http.cc 2008-04-21 16:42:19.0 +0200
  @@ -75,7 +75,7 @@
       surrogateNoStore = false;
       fd = fwd->server_fd;
       readBuf = new MemBuf;
  -    readBuf->init(4096, SQUID_TCP_SO_RCVBUF);
  +    readBuf->init(SQUID_TCP_SO_RCVBUF, SQUID_TCP_SO_RCVBUF);
       orig_request = HTTPMSGLOCK(fwd->request);
  
       if (fwd->servers)
 
 Which is, btw, the coding used in ICAP, where SQUID_TCP_SO_RCVBUF = 16384.
 
 Regards,
 Axel
 
 
 Hi there,
 
 I ran, or rather a customer ran, into a problem today which sounded like this
 bug.
 
 http://www.squid-cache.org/bugs/show_bug.cgi?id=2001
 
 
 So I applied the attached patch to squid-3.0.STABLE4 and did a quick test.
 Still the same problem. My cache.log looks like this:
 
 
 ...
 comm_read_try: FD 16, size 4094, retval 2896, errno 0
 ...
 HttpMsg::parse: failed to find end of headers (eof: 0)
 ...
 http.cc(1050) needs more at 2896
 http.cc(1206) may read up to 1199 bytes from FD 16
 ...
 comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=1
 F->write_handler=0
 comm_select(): Calling read handler on FD 16
 comm_read_try: FD 16, size 1198, retval 1198, errno 0
 ...
 HttpMsg::parse: failed to find end of headers (eof: 0)
 ...
 http.cc(1050) needs more at 4094
 http.cc(1206) may read up to 1 bytes from FD 16
 ...
 comm_select(): got FD 16 events=1 monitoring=19 F->read_handler=0
 F->write_handler=0
 comm_select(): no read handler for FD 16
 
 and so on and so on. So I checked the coding in http.cc and changed it as
 follows.
 
 --- src/http.cc 2008-04-01 13:54:38.0 +0200
 +++ src/http.cc 2008-04-21 16:42:19.0 +0200
 @@ -75,7 +75,7 @@
      surrogateNoStore = false;
      fd = fwd->server_fd;
      readBuf = new MemBuf;
 -    readBuf->init(4096, SQUID_TCP_SO_RCVBUF);
 +    readBuf->init(16384, SQUID_TCP_SO_RCVBUF);
      orig_request = HTTPMSGLOCK(fwd->request);
 
      if (fwd->servers)
 
 
 
 Now it works but I am not sure if a) this is a good solution and b) a
 stable one :-).
 
 Maybe someone with more knowledge can do a check.
 
 Regards,
 Axel Westerhold
 DTS Systeme GmbH
 
 



squid DTD project question

2008-04-21 Thread Darius BUFNEA


Dear Sirs,

I would like to know the status of the Squid Duplicate Transfer Detection
project. A recent analysis I have performed on our department proxy
server's cache shows that around 12% of the cache objects are duplicates,
and these duplicate objects occupy around 10% of the total cache space. On
the squid devel web site the last news about the project is almost 5 years
old, and my measurements show that the project deserves to be continued.
Since I am not able to download the DSA and DTD patches from the squid
devel web site, would you be so kind as to inform me about the current
status of the project, and whether there are any available related
patches for squid (DSA and DTD)?


Best regards,

Darius Bufnea
Department of Computer Science
``Babes-Bolyai'' University of Cluj-Napoca, Romania




Re: Long response header problem

2008-04-21 Thread Axel Westerhold

 Hmm.. can't seem to reproduce this.
 
 The proposed change does not fix the problem, just hides it a bit.

See my last mail of three (:-) sorry, not my best day)



 
 The 3.0.STABLE4 code already bumps the read size to 1KB minimum when
 headers haven't been successfully parsed yet. See
 HttpStateData::maybeReadVirginBody()
 
 Do you have an example URL triggering the problem?

Yes and no. The URL involves an authentication dialog I can't give you
the username and password for. I'll check if I can come up with something
similar.

 
 Are you using ICAP?

ICAP is off for this test.

 
 Any other interesting details about your configuration?

Nothing special. Actually the bug showed up on STABLE1, and I tested with
STABLE4 both without modifications (failed) and patched with the longresp
patch (failed).

 
 Regards
 Henrik
 

As said, see my third mail:

---SNIP---

--- src/http.cc 2008-04-01 13:54:38.0 +0200
+++ src/http.cc 2008-04-21 19:11:47.0 +0200
@@ -1200,7 +1200,7 @@
 void
 HttpStateData::maybeReadVirginBody()
 {
-    int read_sz = replyBodySpace(readBuf->spaceSize());
+    int read_sz = replyBodySpace(readBuf->potentialSpaceSize());

     debugs(11,9, HERE << (flags.do_next_read ? "may" : "wont") <<
            " read up to " << read_sz << " bytes from FD " << fd);


spaceSize will only return the size left from the initial size. This will
result in read_sz < 2 and a return some lines down in http.cc.

potentialSpaceSize will return max_capacity - terminatedSize, which seems
more logical.


---SNIP---


Regards,
Axel



Re: Long response header problem

2008-04-21 Thread Henrik Nordstrom
Mon 2008-04-21 at 21:35 +0200, Axel Westerhold wrote:

 --- src/http.cc 2008-04-01 13:54:38.0 +0200
 +++ src/http.cc 2008-04-21 19:11:47.0 +0200
 @@ -1200,7 +1200,7 @@
  void
  HttpStateData::maybeReadVirginBody()
  {
 -    int read_sz = replyBodySpace(readBuf->spaceSize());
 +    int read_sz = replyBodySpace(readBuf->potentialSpaceSize());
 
      debugs(11,9, HERE << (flags.do_next_read ? "may" : "wont") <<
             " read up to " << read_sz << " bytes from FD " << fd);
 

Ok, that's quite a different change. But still not right. See below.


 spaceSize will only return the size left from the initial size. This will
 result in read_sz < 2 and a return some lines down in http.cc.

 potentialSpaceSize will return max_capacity - terminatedSize, which seems
 more logical.

No it's not. We do not want this buffer to grow unless absolutely
needed. The upper limit on buffer size is just a safeguard to make sure
something notices when things run completely out of bounds.

Regarding how it handles long headers, look a few lines down... it only
returns there if the header has been parsed. If the header has not yet
been parsed it allows the buffer to grow by reading at least 1024 octets
more..

if (read_sz < 2) {
    if (flags.headers_parsed)
        return;
    else
        read_sz = 1024;
}


But there is one cosmetic problem here in that we log the expected read
size before adjustment, with the adjustment being silent in debug logs..

Regards
Henrik



Re: Long response header problem

2008-04-21 Thread Amos Jeffries

Axel Westerhold wrote:

Mon 2008-04-21 at 21:35 +0200, Axel Westerhold wrote:


--- src/http.cc 2008-04-01 13:54:38.0 +0200
+++ src/http.cc 2008-04-21 19:11:47.0 +0200
@@ -1200,7 +1200,7 @@
 void
 HttpStateData::maybeReadVirginBody()
 {
-    int read_sz = replyBodySpace(readBuf->spaceSize());
+    int read_sz = replyBodySpace(readBuf->potentialSpaceSize());

     debugs(11,9, HERE << (flags.do_next_read ? "may" : "wont") <<
            " read up to " << read_sz << " bytes from FD " << fd);


Ok, that's quite a different change. But still not right. See below.



spaceSize will only return the size left from the initial size. This will
result in read_sz < 2 and a return some lines down in http.cc.

potentialSpaceSize will return max_capacity - terminatedSize, which seems
more logical.

No it's not. We do not want this buffer to grow unless absolutely
needed. The upper limit on buffer size is just a safeguard to make sure
something notices when things run completely out of bounds.

Regarding how it handles long headers, look a few lines down... it only
returns there if the header has been parsed. If the header has not yet
been parsed it allows the buffer to grow by reading at least 1024 octets
more..

if (read_sz < 2) {
    if (flags.headers_parsed)
        return;
    else
        read_sz = 1024;
}


But there is one cosmetic problem here in that we log the expected read
size before adjustment, with the adjustment being silent in debug logs..

Regards
Henrik


Uhmmm,

See my maybeReadVirginBody() from STABLE4.

Any chance that you're using CVS?


Do you mean you want CVS access?
We use Bazaar for Squid-3. Details:
  http://wiki.squid-cache.org/Squid3VCS

Or rsync has the latest patched source:
  rsync -avz rsync://squid-cache.org/source/squid-3.0


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: Feature Comparison Map

2008-04-21 Thread Amos Jeffries

Mark Nottingham wrote:

HTCP?



Thanks.
There are a lot of others missing too. IIRC the original list was
incomplete at two pages' length.


Amos


On 19/04/2008, at 9:12 PM, Amos Jeffries wrote:
You may recall I built a Feature comparison map a while back after
some users' requests for one.


As a follow up from those discussions, I have finally created the wiki 
page for it:

 http://wiki.squid-cache.org/FeatureComparison

Though in the meantime I seem to have lost my local copy of the
original (quite long) list of features we worked out. Since it's in the
wiki, please add the features you know of. Just note the style of
breaking the long list into sub-sections by type of feature, and the
white-spacing, so it's easier to edit.


When it's looking comprehensive I'd like to link it to the RoadMaps.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


--
Mark Nottingham   [EMAIL PROTECTED]





--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


3.1 Feature Submission Closure

2008-04-21 Thread Amos Jeffries
We are now three weeks past the official published feature submission
deadline, with only a handful of new prospects.


I'd like to freeze new features for 3.1 at the current TODO list plus 
those non-timelined items actively under development which can be 
timelined by April 30th.

http://wiki.squid-cache.org/RoadMap/Squid3

Features which MAY still make it in, if Alexey and Marin, who are
working on them (cc'd), can timeline them by 30th April:


  ACL Namespaces
  Quality of Service (ZPH patch for Squid-3)

I'd also like a status update on the remaining TODO list items, please.
Particularly the CppCodeFormat, due a month ago.



Duane, Henrik: are we able to make 3.1.DEVEL0 the first weekend of May?


Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


Re: ZPH patches for Squid

2008-04-21 Thread Amos Jeffries
If anyone has any info that might help Marin, could you mail it to me
or him please.


Marin Stavrev wrote:

Hello,

I've practically finished the patch porting for the Squid source code,
with one minor glitch that I'll need more time to figure out myself, or
your help with:


I'm also maintaining a kernel patch that stores the incoming TOS field
when an outgoing (client) TCP connection is opened. I'm using this
information via a getsockopt call to forward the original TOS field of a
MISS reply towards squid's client (the TOS-preserving feature of ZPH).
For this purpose I need to have the socket handle of the upstream
connection (the one to the remote server). So far I have not figured out
if it is available inside the clientReplyContext::doGetMoreData() method
(when the case is a MISS, of course).
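
A heavily hedged sketch of the kind of call being described: ZPH_TOS_RECV
below stands in for whatever option the patched kernel actually exposes
(it is not a standard sockopt), and server_fd/client_fd are illustrative:

    int tos = 0;
    socklen_t len = sizeof(tos);
    // read the remembered upstream TOS from the patched kernel
    if (getsockopt(server_fd, IPPROTO_IP, ZPH_TOS_RECV, &tos, &len) == 0) {
        // mirror it onto the client-side socket via the standard IP_TOS
        setsockopt(client_fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
    }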


If you are aware of an easy reference to the remote server's connection
handle, I can finish the patch by the end of this week (I have a regular
job that does not allow me to devote much time to other activities).


Best regards
M. Stavrev



Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4