Re: [PATCH] StoreID stage 2/3 fully functional StoreID interface.

2012-12-06 Thread Eliezer Croitoru

Thanks Alex and Amos,

The audit gave me a lot!
I hope to sit down with it today, and if not today then later next week.
For now I am running this patch on a testing machine to make sure it does
what is needed, for a reasonable period of time, without bad side effects.


How long does it take before a patch is committed to trunk (after all
the fixes)? Does it need approval from the others and then 10 more days?


Thanks,
Eliezer

On 12/6/2012 9:04 AM, Alex Rousskov wrote:

On 12/05/2012 06:20 PM, Amos Jeffries wrote:

  - for store_id it is not clear from the name what it is exactly. The
comment "Storage of StoreID for the specific cases that the request not
avalile" leaves me just as much in the dark about what this variable holds.


Agreed, especially since this is a request member, so the request is
naturally available. This sounds like a comment cut-and-paste error of
some sort, because the patch always sets this data member after a
successful helper response.



   + Prefer something like: The ID string used internally by Squid to
uniquely de-duplicate this request's URL with other URLs' stored objects.


I find "uniquely de-duplicate" too puzzling. If you want to briefly
define what store ID is here, how about this:

 If defined, store_id_program mapped the requested URL to this ID.
 Store uses this ID (and not the URL) to find and store entries,
 avoiding caching duplicate entries when different URLs point to
 essentially the same cachable resource.

If you do not want to define the concept itself here, then the first
line would suffice IMO.
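
To make the mapping concrete, here is a minimal sketch of a store-id
helper in Python. Everything in it is hypothetical - the CDN pattern,
the .squid.internal ID, and the exact reply syntax are illustration
only, not taken from the patch under review:

#!/usr/bin/env python
# Hypothetical store-id helper sketch; not from the patch under review.
# Assumes a line-based helper protocol: each input line carries an
# optional numeric channel ID followed by the requested URL, and the
# reply is "OK store-id=<id>" to map the URL or "ERR" to leave it as-is.
import re
import sys

# Collapse mirrors that serve the same bits under different hostnames
# into one internal ID, so Store sees a single entry.
CDN_RE = re.compile(r'^http://cdn\d+\.example\.com/(videos/.+)$')

def store_id_for(url):
    m = CDN_RE.match(url)
    if m:
        return 'http://cdn.example.com.squid.internal/' + m.group(1)
    return None

for line in sys.stdin:
    parts = line.split()
    if not parts:
        continue
    if parts[0].isdigit() and len(parts) > 1:
        channel, url = parts[0] + ' ', parts[1]
    else:
        channel, url = '', parts[0]
    mapped = store_id_for(url)
    sys.stdout.write(channel + ('OK store-id=' + mapped if mapped else 'ERR') + '\n')
    sys.stdout.flush()

With such a helper, requests for cdn1.example.com/videos/x and
cdn2.example.com/videos/x share one cache entry while each is still
fetched from its original URL on a miss.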


BTW, when you do not think the patch is ready for commit, please use a
[PREVIEW] subject prefix or no prefix at all. The [PATCH] prefix is for
patches that you think should be committed essentially without changes.


HTH,

Alex.



--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [PATCH] StoreID stage 2/3 fully functional StoreID interface.

2012-12-06 Thread Amos Jeffries

On 6/12/2012 9:03 p.m., Eliezer Croitoru wrote:

Thanks Alex and Amos,

The audit gave me a lot!
I hope to sit down with it today, and if not today then later next week.
For now I am running this patch on a testing machine to make sure it
does what is needed, for a reasonable period of time, without bad side
effects.


How long does it take before a patch is committed to trunk (after all
the fixes)? Does it need approval from the others and then 10 more days?


A patch is committed when three separate people agree (with a +1 vote
each) that there are no changes to be made. Submitting counts as one
vote from you. The time to commit once agreement is reached is usually
minutes or hours - getting the agreement is the long part for those
like yourself who are still learning the required coding styles.


Amos



Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-06 Thread Amos Jeffries

On 5/12/2012 9:19 a.m., Alex Rousskov wrote:

On 12/01/2012 06:02 PM, Alex Rousskov wrote:

To summarize, our choices for a pinned connection created for the
current client are:

Before sending the request:

1. Reopen and repin used pconns to send unretriable requests,
just like non-pinned connection code does. This eliminates
HTTP pconn race conditions for unretriable requests. This is
what the proposed patch does for SslBump.

2. Send all requests without reopening the pinned connection.
If we encounter an HTTP pconn race condition, see below.


After the request, if an HTTP pconn race happens:

0. Send an error message to the client.
This is what we do today and it breaks things.

1. Reset the client connection.
Pretend to be a dumb tunnel.

2. Retry, just like the current non-pinned connection code does.
This is what the proposed patch implements.


Current code does 2+0. That does not work.

The proposed patch does 1+2. That works in my limited tests.

Henrik suggested 2+1. I agree that it is also an option worth
considering.

I am leaning towards implementing 2+1 _and_ removing the current
connection repinning code as not needed.

We could leave repinning code in case it is needed later to fix broken
clients, but no such clients are known at this time, while that code is
complex and requires fixes (the patch posted here earlier), so I am
leaning towards removing it completely for now. With some effort it can
be re-introduced at a later time, of course.

If there are any better ideas, please let me know.
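
To keep the option numbering straight, here is a minimal runnable
sketch of the three after-race choices and the "2+1" combination. All
names and behaviours in it are invented for illustration; this is not
Squid code:

# Sketch of the before/after-race choices above; not Squid code.

class PconnRace(Exception):
    """The server closed a reused persistent connection before replying."""

def send_on_pinned(request, server_alive):
    # Before-sending option "2": reuse the pinned connection as-is,
    # without reopening/repinning it first.
    if not server_alive:
        raise PconnRace()
    return '200 OK'

def forward(request, server_alive, after_race):
    try:
        return send_on_pinned(request, server_alive)
    except PconnRace:
        if after_race == 0:
            return 'error page to client'  # what we do today; breaks things
        if after_race == 1:
            return 'TCP RST to client'  # pretend to be a dumb tunnel
        return send_on_pinned(request, True)  # retry, like non-pinned code

# The "2+1" combination: reuse the pinned connection, reset on a race.
print(forward('GET /', server_alive=False, after_race=1))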


Re-pinning is somewhat useful for the cases of:
 * Kerberos/Bearer auth, where credentials on the request are sufficient
to re-auth the new connection identically to the original one.
 * NAT (not TPROXY) intercepted connections to the same destination
IP:port, where verification can lock it down to that destination, but
not completely.


In both cases this holds provided the destination IP:port is the same as
the original one. Were it not for the required client IP:port combo
already being in use, this would also be true for TPROXY. And were it
not for the handshake hassle, this would also be true for NTLM.


The cases with problems are:
 * NTLM, due to the handshake complexity which nobody has implemented
yet (otherwise this would be in the useful category too, with Kerberos).

 * TPROXY, where the src+dst tuple can collide with the old connection.
 * CONNECT, where we don't know what the state in the data channel
contains. Bumping should fit into this too if we are to emulate the
CONNECT properly, even though we do know the state.
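
As a rough summary of the two lists above, the safety question might be
sketched like this (a Python sketch with invented names; deliberately
conservative and not Squid code):

# Sketch of the repinning-safety cases above; all names are invented.

def may_repin(auth_scheme, interception, same_dst, inside_connect):
    if inside_connect:
        return False  # unknown state in the CONNECT/bumped data channel
    if auth_scheme == 'NTLM':
        return False  # handshake replay not implemented (yet)
    if interception == 'TPROXY':
        return False  # spoofed client IP:port may collide with the old conn
    if auth_scheme in ('Kerberos', 'Bearer'):
        return same_dst  # request credentials can re-auth the new connection
    if interception == 'NAT':
        return same_dst  # verification locks it to that destination, mostly
    return False

# e.g. a Kerberos-authenticated request to the original destination:
print(may_repin('Kerberos', 'none', same_dst=True, inside_connect=False))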



Amos


Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-06 Thread Alex Rousskov
On 12/06/2012 05:21 AM, Amos Jeffries wrote:
 On 5/12/2012 9:19 a.m., Alex Rousskov wrote:
 On 12/01/2012 06:02 PM, Alex Rousskov wrote:
 To summarize, our choices for a pinned connection created for the
 current client are:

 Before sending the request:

 1. Reopen and repin used pconns to send unretriable requests,
 just like non-pinned connection code does. This eliminates
 HTTP pconn race conditions for unretriable requests. This is
 what the proposed patch does for SslBump.

 2. Send all requests without reopening the pinned connection.
 If we encounter an HTTP pconn race condition, see below.


 After the request, if an HTTP pconn race happens:

 0. Send an error message to the client.
 This is what we do today and it breaks things.

 1. Reset the client connection.
 Pretend to be a dumb tunnel.

 2. Retry, just like the current non-pinned connection code does.
 This is what the proposed patch implements.


 Current code does 2+0. That does not work.

 The proposed patch does 1+2. That works in my limited tests.

 Henrik suggested 2+1. I agree that it is also an option worth
 considering.
 I am leaning towards implementing 2+1 _and_ removing the current
 connection repinning code as not needed.

 We could leave repinning code in case it is needed later to fix broken
 clients, but no such clients are known at this time, while that code is
 complex and requires fixes (the patch posted here earlier), so I am
 leaning towards removing it completely for now. With some effort it can
 be re-introduced at a later time, of course.

 If there are any better ideas, please let me know.
 
 Re-pinning is somewhat useful for the cases of:
  * Kerberos/Bearer auth, where credentials on the request are sufficient
 to re-auth the new connection identically to the original one.
  * NAT (not TPROXY) intercepted connections to the same destination
 IP:port, where verification can lock it down to that destination, but
 not completely.


Hi Amos,

Since repinning has been declared a bug in general, let's leave
developing support for these special cases to those who need them.
Those folks will be in a better position to advocate that their cases
need repinning _and_ to properly detect safe repinning conditions,
validate repinned connections, etc. They will be able to reuse most of
our deleted code, of course, but some new code will be needed to take
care of their cases anyway.

The same applies to any other special repinning cases that we are not
aware of right now. Hopefully, each such case will get a careful enough
review in the future to avoid another removal.


Thank you,

Alex.



[BUG] (13) Permission denied with workers | squid 3.2.3 and 3.3 rev 12500.

2012-12-06 Thread Eliezer Croitoru

I am trying to use workers, and Squid does not seem to bind its sockets.
Basic settings, with build options:
Squid Cache: Version 3.HEAD-BZR
configure options:  '--prefix=/opt/squid33-12500' '--enable-delay-pools' 
'--enable-icap-client' '--disable-wccp' '--disable-wccpv2' 
'--enable-ssl' '--enable-linux-netfilter' '--disable-ipv6' 
'--disable-translation' '--disable-auto-locale' 
'--with-default-user=proxy' '--with-large-files' '--with-dl' 
'--enable-leakfinder' '--with-valgrind-debug' '--enable-ssl-crtd'


In cache.log I am getting these:
2012/12/06 21:34:29.780 kid3| commBind: Cannot bind socket FD 10 to 
[::]: (13) Permission denied
2012/12/06 21:34:29.806 kid2| commBind: Cannot bind socket FD 20 to 
[::]: (13) Permission denied
2012/12/06 21:34:29.812 kid1| commBind: Cannot bind socket FD 20 to 
[::]: (13) Permission denied
2012/12/06 21:34:52.210 kid2| commBind: Cannot bind socket FD 20 to 
[::]: (13) Permission denied


I also tried rebuilding the cache, etc.
The config is basically:
workers 2 # more than 1
...
http_port 3127 intercept
http_port 3128
http_port 3129 tproxy
..

I was thinking of using debug section 5,9 to get some output, but I am
still not sure what to enable and how.
Since there are more users running workers out there, do they not have
this issue?
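
For reference, the knob for this is the debug_options directive. If I
read the debug sections list correctly, section 5 covers the socket and
comm code, so something like this in squid.conf should show the bind
attempts in detail:

debug_options ALL,1 5,9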


Linux localhost.localdomain 3.3.8-gentoo #4 SMP Fri Nov 2 15:04:14 IST 
2012 x86_64 Intel(R) Atom(TM) CPU D525 @ 1.80GHz GenuineIntel GNU/Linux


The same happens on a RedHat EL6 64-bit machine with a Xeon.
No SELinux at all.

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il