Summary of discussion about StoreUrl coming.

2012-12-01 Thread Eliezer Croitoru

Thanks to Alex and Amos.

As we discussed in the IRC channel, it seems the last blocker for the 
store_url feature has cleared up, and I have started coding.


Worth mentioning:
- The blocker was a URL mismatch that came up because I chose not to 
change the original URL stored in the object metadata.
There is not really a reason to save it for store_url-modified requests, 
since it will not match anyway, and the URL+headers used for 
revalidation (304) are taken from the client request.
* To solve it I will use StoreEntry->mem_obj->originalUrl as 
StoreEntry->mem_obj->storeId, which makes more sense for how it is used.


This decision will affect how the debugs output looks in a couple of 
ways when the feature is on, and also some specific features in Squid.


Unlike the old storeUrl feature, where refresh_pattern was applied to 
the request URL, with this change refresh_pattern will match only 
against the storeId, which in the regular case is the original URL and, 
when the URL is rewritten, is always the modified one.
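For example (an illustrative configuration sketch only, not taken from 
the thread; the helper behaviour and the internal hostname below are made 
up), if a helper maps per-CDN video URLs onto a single store-ID form, a 
refresh_pattern intended for those objects has to match that ID form 
rather than the client-visible URL:

    # hypothetical store-ID form emitted by the helper:
    #   http://video.internal.squid/videos/abc123
    # so the pattern matches the ID form, not the original CDN hostnames
    refresh_pattern -i ^http://video\.internal\.squid/ 1440 20% 10080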


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Eliezer Croitoru
- For now I am not changing mem_obj->url, and I am leaving the branch as 
it is since there is no need for it yet.

Once the feature is working, the change will be made.

- The patch adds the basic surrounding code for the helper and a bit of 
action code, to make sure it is not breaking anything.


- The point that changes everything is the StoreEntry creation, which 
uses the StoreUrl rather than the original URL.


It makes everything simpler, since no URL validation is needed beyond 
what the store code in Squid already does.


- There is a change in how helpers are started and shut down, to 
support multiple helpers.


Responses are more than welcome.

Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
=== modified file 'src/ClientRequestContext.h'
--- src/ClientRequestContext.h  2012-11-30 11:08:47 +
+++ src/ClientRequestContext.h  2012-12-01 11:33:41 +
@@ -34,7 +34,9 @@
 void clientAccessCheck2();
 void clientAccessCheckDone(const allow_t answer);
 void clientRedirectStart();
+void clientStoreUrlStart();
 void clientRedirectDone(const HelperReply reply);
+void clientStoreUrlDone(const HelperReply reply);
 void checkNoCache();
 void checkNoCacheDone(const allow_t answer);
 #if USE_ADAPTATION
@@ -55,6 +57,7 @@
 ClientHttpRequest *http;
 ACLChecklist *acl_checklist;/* need ptr back so we can unreg if 
needed */
 int redirect_state;
+int storeurl_state;
 
 /**
  * URL-rewrite/redirect helper may return BH for internal errors.
@@ -63,6 +66,7 @@
  * This tracks the number of previous failures for the current context.
  */
 uint8_t redirect_fail_count;
+uint8_t storeurl_fail_count;
 
 bool host_header_verify_done;
 bool http_access_done;
@@ -71,6 +75,7 @@
 bool adaptation_acl_check_done;
 #endif
 bool redirect_done;
+bool storeurl_rewrite_done;
 bool no_cache_done;
 bool interpreted_req_hdrs;
 bool tosToClientDone;

=== modified file 'src/HttpRequest.h'
--- src/HttpRequest.h   2012-11-30 22:57:54 +
+++ src/HttpRequest.h   2012-12-01 12:26:32 +
@@ -165,6 +165,8 @@
 
 char *canonical;
 
+char *store_url;
+
 RequestFlags flags;
 
 HttpHdrRange *range;

=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h   2012-10-29 04:59:58 +
+++ src/SquidConfig.h   2012-12-01 11:10:48 +
@@ -203,6 +203,7 @@
 #endif
 
 wordlist *redirect;
+wordlist *store_url;
 #if USE_UNLINKD
 
 char *unlinkd;
@@ -220,6 +221,7 @@
 #endif
 
 HelperChildConfig redirectChildren;
+HelperChildConfig storeUrlChildren;
 time_t authenticateGCInterval;
 time_t authenticateTTL;
 time_t authenticateIpTTL;
@@ -318,6 +320,7 @@
 int nonhierarchical_direct;
 int strip_query_terms;
 int redirector_bypass;
+int store_url_bypass;
 int ignore_unknown_nameservers;
 int client_pconns;
 int server_pconns;
@@ -381,6 +384,7 @@
 acl_access *brokenPosts;
 #endif
 acl_access *redirector;
+acl_access *store_url;
 acl_access *reply;
 AclAddress *outgoing_address;
 #if USE_HTCP

=== modified file 'src/URL.h'
--- src/URL.h   2012-09-21 14:57:30 +
+++ src/URL.h   2012-12-01 12:43:43 +
@@ -83,6 +83,7 @@
 void urlInitialize(void);
HttpRequest *urlParse(const HttpRequestMethod &, char *, HttpRequest *request = 
NULL);
 const char *urlCanonical(HttpRequest *);
+const char *urlStoreUrl(HttpRequest *);
 char *urlCanonicalClean(const HttpRequest *);
 const char *urlCanonicalFakeHttps(const HttpRequest * request);
 bool urlIsRelative(const char *);

=== modified file 'src/client_side_reply.cc'
--- src/client_side_reply.cc2012-11-12 02:25:12 +
+++ src/client_side_reply.cc2012-12-01 12:51:21 +
@@ -797,7 +797,10 @@
 
 for (HttpRequestMethod m(Http::METHOD_NONE); m != Http::METHOD_ENUM_END; 
++m) {
 if (m.respMaybeCacheable()) {
-if (StoreEntry *entry = storeGetPublic(url, m)) {
+/* TODO: fix things to allow store_url requests to be deleted.
+ *   is it the place?
+ */
+   if (StoreEntry *entry = storeGetPublic(url, m)) {
 debugs(88, 5, "purging " << RequestMethodStr(m) << ' ' << url);
 #if USE_HTCP
 neighborsHtcpClear(entry, url, req, m, HTCP_CLR_INVALIDATION);
@@ -2160,7 +2163,15 @@
 if (http->request == NULL)
 http->request = HTTPMSGLOCK(new HttpRequest(m, AnyP::PROTO_NONE, 
null_string));
 
-StoreEntry *e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+StoreEntry *e;
+if (http->request->store_url){
+   assert(http->request->store_url);
+   e = storeCreateEntry(http->request->store_url, http->log_uri, reqFlags, 
m);
+}
+else{
+   e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+}
+
 
 sc = 

Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-01 Thread Henrik Nordström
Fri 2012-11-30 at 23:07 -0700, Alex Rousskov wrote:
 I am not sure what you are asking about, but I can try to rephrase: This
 bug is difficult to fix because some pinned connections should be reused
 and some should not be. Pinned connections that can be re-pinned but
 have not had any HTTP requests sent on them should be reused, even for
 unretriable requests. SslBump creates such connections in forward.cc
 when Squid connects to the origin server to peek at the server
 certificate. Since no HTTP requests were sent on such connections at the
 decision time, this is not really a reuse even though it looks like one
 in all other aspects.

It is. You must take care to not reuse a slightly old (1s or so)
connection under those conditions.

 Which is quite likely the wrong thing to do. See above.
 
 Does the !flags.canRePin exception address your concern?

Yes, if used where needed (TPROXY, NTLM).

Regards
Henrik



Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-01 Thread Alex Rousskov
On 12/01/2012 11:20 AM, Henrik Nordström wrote:
 Fri 2012-11-30 at 23:07 -0700, Alex Rousskov wrote:
 Does the !flags.canRePin exception address your concern?

 Yes, if used where needed (TPROXY, NTLM).

By default, the canRePin flag is not set and pinned connections are
reused, even for unretriable requests. Thus, bare TPROXY and NTLM code
should be fine without any special/additional changes.

However, a combination of TPROXY and SslBump will see the canRePin flag
set (by the SslBump code). We have not heard complaints that the combo
does not work even though recent SslBump code reopened and repinned
closed server-side connections. Perhaps those bug reports are yet to
come. Why can't TPROXY reopen a server-side connection?


Thank you,

Alex.
P.S. Even if we do not handle TPROXY+SslBump combo correctly today, the
required fix will be outside the proposed patch. The patch is still
needed to handle cases where pinned connections can be reopened and
repinned.



Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Alex Rousskov
On 12/01/2012 06:34 AM, Eliezer Croitoru wrote:
 - For now I am not changing the mem_obj->url and leave the branch as it
 is since there is no need for it now.
 Once the feature will be fine The change will be placed.

That is fine, but please note that the feature cannot be committed until
that renaming change is implemented so that you can verify that all
other uses of mem_obj::url are still correct even though that member is
no longer a URL but a store entry ID.


 +const char *store_url = urlStoreUrl(request);
  SquidMD5_CTX M;
  SquidMD5Init(&M);
  SquidMD5Update(&M, &m, sizeof(m));
 -SquidMD5Update(&M, (unsigned char *) url, strlen(url));
 +if (store_url)
 + SquidMD5Update(&M, (unsigned char *) store_url, strlen(store_url));
 +else
 + SquidMD5Update(&M, (unsigned char *) url, strlen(url));


 +const char *
 +urlStoreUrl(HttpRequest * request)
 +{
 +   if (request->store_url)
 +return xstrdup(request->store_url);
 +   else
 +return NULL;
 +}

The combination leaks urlStoreUrl() result as far as I can tell. Besides
that bug, there are several design issues with this:

* Please do not add urlStoreUrl(request). Add a constant HttpRequest
method instead.

* Please do not dupe the url inside urlStoreUrl().

* Please always return a URL for Store from urlStoreUrl(). There is
always such a url, regardless of configuration (it is either the
requested URL or the rewritten one).
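A minimal sketch of the kind of accessor being asked for here (the 
fallback to the already-populated canonical member is an assumption for 
illustration; this is not code from the posted patch):

    // Always returns the key URL for Store: the rewritten store_url if a
    // helper produced one, otherwise the canonical request URL.
    // Assumes 'canonical' was already filled in by urlCanonical().
    const char *
    HttpRequest::storeUrl() const
    {
        return store_url ? store_url : canonical;
    }

No copy is made, so nothing can leak at the call sites.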


 +static helper *storeurls = NULL;

Please use camelCase for variable names unless underline_based_names are
needed for consistency reasons.

Please document all new variables, functions, and methods, including
this one.



 +if (http->request->store_url){
 + assert(http->request->store_url);

The assert is not needed.


 -StoreEntry *e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
 +StoreEntry *e;
 +if (http->request->store_url){
 + assert(http->request->store_url);
 + e = storeCreateEntry(http->request->store_url, http->log_uri, reqFlags, 
 m);
 +}
 +else{
 + e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
 +}

To avoid code duplication, compute the URL (the first parameter) and
then call storeCreateEntry with the right first parameter.

BTW, to compute the first parameter, you may be able to just call
HttpRequest::storeUrl() method discussed above (if it always returns a
URL for Store).
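For illustration, the deduplicated call could look roughly like this 
(a sketch only, keeping http->uri as the fallback; whether http->uri or 
the canonical URL is the right fallback is discussed later in the thread):

    // Pick the Store key string once, then create the entry once.
    const char *storeUri = http->request->store_url ?
                           http->request->store_url : http->uri;
    StoreEntry *e = storeCreateEntry(storeUri, http->log_uri, reqFlags, m);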


 +int storeurl_state;

 +uint8_t storeurl_fail_count;

The use of underline_based_names is correct here, but you need to add an
underline between the store and url words as well.

 +// reset state flag to try redirector again from scratch.
 +storeurl_rewrite_done = false;

 +case HelperReply::Okay: {
 +Note::Pointer urlNote = reply.notes.find("rewrite-url");

Can we unify these parts of redirector and store URL rewriting code to
avoid code duplication and bugs that it brings, like the ones above? Is
ClientRequestContext::clientStoreUrlDone() that much different from
ClientRequestContext::clientRedirectDone() that you cannot merge their
common parts?

 +// XXX: put the storeURL value somewhere.
 +http->request->store_url = xstrdup(urlNote->firstValue());

What does this XXX mean? Does not this code put the storeURL value
somewhere already?

You may want to either assert that http->request->store_url is not NULL
when this code runs or free the old value of the store_url. This should
probably be done by an HttpRequest setter method though.
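A possible setter sketch along those lines (the method name is made up; 
only the ownership behaviour is the point):

    // Keep a private copy of the helper-provided store URL and release
    // any previous value so repeated calls cannot leak it.
    void
    HttpRequest::setStoreUrl(const char *aUrl)
    {
        safe_free(store_url);
        store_url = xstrdup(aUrl);
    }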

 +Note::Pointer urlNote = reply.notes.find("rewrite-url");

Make urlNote const if you can.


Please search your patch for tabulation characters in sources and
replace the ones you added with spaces.


Finally, I suggest using "store ID" instead of "store URL" in all
internal names and code comments. It is a good idea to remind us that
there is nothing "URL" about this opaque string.

I would even argue that the squid.conf directive should be called
differently, to emphasize that this is a URL:ID mapping feature rather
than a URL rewriting feature (I do not think we need to maintain backward
compatibility with Squid2's unfortunate naming here).


Thank you,

Alex.



Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-01 Thread Alex Rousskov
On 11/30/2012 08:27 PM, Amos Jeffries wrote:
 On 1/12/2012 1:31 p.m., Henrik Nordström wrote:
 Fri 2012-11-30 at 15:30 -0700, Alex Rousskov wrote:

  Squid is sending POST requests on reused pinned connections, and
 some of those requests fail due to a pconn race, with no possibility for
 a retry.
 Yes... and we have to for NTLM, TPROXY and friends or they get in a bit
 of trouble from connection state mismatch.

 If sending the request fails we should propagate this to the client by
 resetting the client connection and let the client retry.
 
 It seems to me we are also forced to do this for ssl-bump connections.

The current code does not force it. From user experience point of view,
resetting the client connection is usually the wrong choice in this
context (it breaks sites/transactions that work correctly with the
current code). Thus, if we disagree whether this should be the default
behavior, somebody will need to add a knob to control it from squid.conf.


  * Opening a new connection is the wrong thing to do for server-first
 bumped connections, where the new connection MAY go to a completely
 different server than the one whose certificate was bumped with. We
 control the IP:port we connect to, but we cannot control IP-level load
 balancers existence.

Not exactly. Opening a new connection MIGHT be wrong (not IS wrong). In
all real use cases we know of today, it is actually the right thing to
do. So far, we have found no cases where a load balancer or a similar
device sends the request to a semantically different server. I am not
saying such cases do not exist, but they are definitely not the norm.

Please note that Squid checks the certificate of the new server when the
connection is repinned. We do not just trust that the new server
certificate is going to be similar. IIRC, this has been discussed in the
past, when bump-server-first patches were reviewed.


  * client-first bumped connections do not face the lag, *BUT* there is
 no way to identify them at forwarding time separately from server-first
 bumped.

The lag is not the primary issue here. Pconn races are. All SslBump
connections suffer from those.

We can discuss the lag separately. Since reconnecting after 1 second lag
may face the same lag, it may not really help. I would not object to
adding an optional feature to reconnect after X second lag, but I
recommend waiting for a real use case to justify that complication (and
to help us detail the fix correctly).



  * we are pretending to be a dumb relay - which offers the ironclad
 guarantee that the server at the other end is a single TCP endpoint (DNS
 uncertainty is only on the initial setup. Once connected packets reach
 *an* endpoint they all do or the connection dies).

Conceptually, yes. In practice, "we break this popular site" reasons
require us to make implementation adjustments to that nice theoretical
concept of a dumb relay.


 We can control the outgoing IP:port details, but have no control over
 the existence of IP-level load balancers which can screw with the
 destination server underneath us. Gambling on the destination not
 changing on an HTTPS outbound when retrying for intercepted traffic will
 re-opening at least two CVE issues 3.2 is supposed to be immune to
 (CVE-2009-0801 and CVE-2009-3555).

We do not gamble. We verify that the new destination has a similar
certificate.


 Races are also still very possible on server-bumped connections if for
 any reason it takes longer to receive+parse+adapt+reparse the client
 request than the server wants to wait for.

Yes, but that would not be a pconn race. Let's not enlarge the scope of
the problem in the context of this patch review. The proposed patch
fixes pconn races (with real use cases behind the fix!). It does not
pretend to fix something else.


 A straight usage counter is definitely the wrong thing to use to
 control this whether or not you agree with us that re-trying outbound
 connections is safe after guaranteeing the client (with encryption
 certificate no less) that a single destination has been setup. What is
 needed is a suitable length idle timeout and a close handler.

The usage counter is the right approach to detect pconn races, by
definition: A pconn race happens _after_ the first HTTP connection use.
The counter simply helps us detect when we have used the HTTP connection
at least once.


 The issue is not that the conn was used then pooled versus pinned. The
 issue is the async period between the last and current packet on the socket
 - we have no way to identify if the duration between has caused problems
 (crtd, adaptation or ACL lag might be enough to die from some race with
 NAT timeouts). Whether that past use was the SSL exchange (server-bump
 only) or a previous HTTP data packet.

I am not trying to fix all possible race conditions. The patch is trying
to fix just one: an HTTP pconn race. Other fixes would be welcomed, of
course, especially if backed by real use cases/need.



 I agree 

Build failed in Jenkins: 3.2-matrix » master #282

2012-12-01 Thread noc
See http://build.squid-cache.org/job/3.2-matrix/./label=master/282/changes

Changes:

[Amos Jeffries] ext_file_userip_acl: Remove duplicate variable definition

--
[...truncated 3784 lines...]
mv -f .deps/libltdlc_la-slist.Tpo .deps/libltdlc_la-slist.Plo
/bin/sh ./libtool --tag=CC   --mode=link gcc  -g -O2 -module -avoid-version  -o 
dlopen.la  dlopen.lo -ldl -ldl 
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../libltdl 
-DLT_CONFIG_H=config.h -DLTDL -I. -I../../libltdl -Ilibltdl 
-I../../libltdl/libltdl -I../../libltdl/libltdl -g -O2 -MT lt__strl.lo -MD -MP 
-MF .deps/lt__strl.Tpo -c ../../libltdl/lt__strl.c  -fPIC -DPIC -o 
.libs/lt__strl.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../libltdl 
-DLT_CONFIG_H=config.h -DLTDL -I. -I../../libltdl -Ilibltdl 
-I../../libltdl/libltdl -I../../libltdl/libltdl -g -O2 -MT lt__strl.lo -MD -MP 
-MF .deps/lt__strl.Tpo -c ../../libltdl/lt__strl.c -o lt__strl.o >/dev/null 2>&1
libtool: link: ar cru .libs/dlopen.a .libs/dlopen.o 
libtool: link: ranlib .libs/dlopen.a
libtool: link: ( cd .libs && rm -f dlopen.la && ln -s ../dlopen.la 
dlopen.la )
mv -f .deps/lt__strl.Tpo .deps/lt__strl.Plo
/bin/sh ./libtool --tag=CC   --mode=link gcc  -g -O2 -no-undefined -dlpreopen 
dlopen.la   -o libltdlc.la  libltdlc_la-preopen.lo libltdlc_la-lt__alloc.lo 
libltdlc_la-lt_dlloader.lo libltdlc_la-lt_error.lo libltdlc_la-ltdl.lo 
libltdlc_la-slist.lo lt__strl.lo -ldl 
libtool: link: rm -f .libs/libltdlc.nm .libs/libltdlc.nmS .libs/libltdlc.nmT
libtool: link: (cd .libs && gcc -g -O2 -c -fno-builtin  -fPIC -DPIC 
libltdlcS.c)
libtool: link: rm -f .libs/libltdlcS.c .libs/libltdlc.nm 
.libs/libltdlc.nmS .libs/libltdlc.nmT
libtool: link: (cd .libs/libltdlc.lax/dlopen.a && ar x 
http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/libltdl/./.libs/dlopen.a;)
libtool: link: ar cru .libs/libltdlc.a .libs/libltdlc_la-preopen.o 
.libs/libltdlc_la-lt__alloc.o .libs/libltdlc_la-lt_dlloader.o 
.libs/libltdlc_la-lt_error.o .libs/libltdlc_la-ltdl.o .libs/libltdlc_la-slist.o 
.libs/lt__strl.o .libs/libltdlcS.o  .libs/libltdlc.lax/dlopen.a/dlopen.o 
libtool: link: ranlib .libs/libltdlc.a
libtool: link: rm -fr .libs/libltdlc.lax
libtool: link: ( cd .libs && rm -f libltdlc.la && ln -s ../libltdlc.la 
libltdlc.la )
make[3]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/libltdl'
make[2]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/libltdl'
Making all in scripts
make[2]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/scripts'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/scripts'
Making all in icons
make[2]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/icons'
make[2]: Nothing to be done for `all'.
make[2]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/icons'
Making all in errors
make[2]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/errors'
case "" in \
off) \
echo "WARNING: Translation is disabled."; \
;; \
""|no) \
echo "WARNING: Translation toolkit was not detected."; \
;; \
esac; \
touch translate-warn
WARNING: Translation toolkit was not detected.
make[2]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/errors'
Making all in doc
make[2]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc'
Making all in manuals
make[3]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc/manuals'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc/manuals'
make[3]: Entering directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc'
make[3]: Nothing to be done for `all-am'.
make[3]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc'
make[2]: Leaving directory 
`http://build.squid-cache.org/job/3.2-matrix/./label=master/ws/btlayer-00-default/squid-3.2.3-BZR/_build/doc'
Making all in helpers
make[2]: Entering directory 

Re: mapping of ecap LogVerbosity to debug_options levels

2012-12-01 Thread Alex Rousskov
On 11/30/2012 08:49 PM, carteriii wrote:
 The logging functionality of the ecap adapter allows users to specify a
 LogVerbosity which is a mask of Importance, Frequency, and Message Size:
 
 enum ImportanceLevel { ilDebug = 0, ilNormal = 1, ilCritical = 2 }; // 0xF
 enum FrequencyLevel { flOperation = 0, flXaction = 1 << 4, flApplication = 2
  << 4 }; // 0xF0
 enum MessageSizeLevel { mslNormal = 0, mslLarge = 1 << 8 }; // 0xF00
 
 Could someone tell me how these relate to Squid's debug_options levels of
 1-9?

The mapping from libecap::LogVerbosity to Squid debugging levels is done
by the following Squid function in src/adaptation/ecap/Host.cc:


 static int
 SquidLogLevel(libecap::LogVerbosity lv)
 {
 if (lv.critical())
 return DBG_CRITICAL; // is it a good idea to ignore other flags?
 
 if (lv.large())
 return DBG_DATA; // is it a good idea to ignore other flags?
 
 if (lv.application())
 return DBG_IMPORTANT; // is it a good idea to ignore other flags?
 
 return 2 + 2*lv.debugging() + 3*lv.operation() + 2*lv.xaction();
 }

The mapping algorithm is far from perfect, of course, especially the
last line. Improvements are welcomed, but please keep it simple.
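For example (illustrative arithmetic only, assuming debugging(), 
operation() and xaction() evaluate to 0 or 1 once the earlier flags have 
been ruled out), the last line yields:

    // debugging only:        2 + 2*1 + 3*0 + 2*0 = 4
    // operation level only:  2 + 2*0 + 3*1 + 2*0 = 5
    // debugging + xaction:   2 + 2*1 + 3*0 + 2*1 = 6
    // operation + xaction:   2 + 2*0 + 3*1 + 2*1 = 7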


HTH,

Alex.



Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-01 Thread Alex Rousskov
Hello,

It just occurred to me that there is nothing wrong, from a theoretical
protocol point of view, with resetting the client connection when a
server-side race condition is detected _and_ we know that the client was
the one that caused Squid to open the server connection. I personally do
not know whether there are use cases that would be broken by that.

To summarize, our choices for a pinned connection created for the
current client are:

Before sending the request:

1. Reopen and repin used pconns to send unretriable requests,
   just like non-pinned connection code does. This eliminates
   HTTP pconn race conditions for unretriable requests. This is
   what the proposed patch does for SslBump.

2. Send all requests without reopening the pinned connection.
   If we encounter an HTTP pconn race condition, see below.


After the request, if an HTTP pconn race happens:

0. Send an error message to the client.
   This is what we do today and it breaks things.

1. Reset the client connection.
   Pretend to be a dumb tunnel.

2. Retry, just like the current non-pinned connection code does.
   This is what the proposed patch implements.


Current code does 2+0. That does not work.

The proposed patch does 1+2. That works in my limited tests.

Henrik suggested 2+1. I agree that it is also an option worth
considering. Will it break actual clients? I do not know, but:

a) If we are talking about unretriable requests, the client already
violated HTTP by sending such a request over the persistent client
connection. Some of these clients may fail to handle resets properly.

b) To implement 2+1 correctly, Squid will need to close the _client_
connection when the origin server sends "Connection: close". We do not
do that now IIRC.


Thank you,

Alex.



Build failed in Jenkins: 3.HEAD-amd64-CentOS-5.3 #2165

2012-12-01 Thread noc
See http://build.squid-cache.org/job/3.HEAD-amd64-CentOS-5.3/2165/changes

Changes:

[Automatic source maintenance] SourceFormat Enforcement

--
[...truncated 7833 lines...]
373b802000-373ba01000 ---p 2000 fd:00 130496 
/lib64/libcom_err.so.2.1
373ba01000-373ba02000 rw-p 1000 fd:00 130496 
/lib64/libcom_err.so.2.1
373bc0-373bc02000 r-xp  fd:00 130495 
/lib64/libkeyutils-1.2.so
373bc02000-373be01000 ---p 2000 fd:00 130495 
/lib64/libkeyutils-1.2.so
373be01000-373be02000 rw-p 1000 fd:00 130495 
/lib64/libkeyutils-1.2.so
373c00-373c02c000 r-xp  fd:00 244769 
/usr/lib64/libgssapi_krb5.so.2.2
373c02c000-373c22c000 ---p 0002c000 fd:00 244769 
/usr/lib64/libgssapi_krb5.so.2.2
373c22c000-373c22e000 rw-p 0002c000 fd:00 244769 
/usr/lib64/libgssapi_krb5.so.2.2
373c40-373c424000 r-xp  fd:00 244767 
/usr/lib64/libk5crypto.so.3.1
373c424000-373c623000 ---p 00024000 fd:00 244767 
/usr/lib64/libk5crypto.so.3.1
373c623000-373c625000 rw-p 00023000 fd:00 244767 
/usr/lib64/libk5crypto.so.3.1
373c80-373c808000 r-xp  fd:00 235112 
/usr/lib64/libkrb5support.so.0.1
373c808000-373ca07000 ---p 8000 fd:00 235112 
/usr/lib64/libkrb5support.so.0.1
373ca07000-373ca08000 rw-p 7000 fd:00 235112 
/usr/lib64/libkrb5support.so.0.1
373cc0-373cc91000 r-xp  fd:00 244768 
/usr/lib64/libkrb5.so.3.3
373cc91000-373ce91000 ---p 00091000 fd:00 244768 
/usr/lib64/libkrb5.so.3.3
373ce91000-373ce95000 rw-p 00091000 fd:00 244768 
/usr/lib64/libkrb5.so.3.3
2b741da91000-2b741da93000 rw-p 2b741da91000 00:00 0 
2b741daa5000-2b741daef000 r-xp  fd:00 240454 
/usr/lib64/libcppunit-1.12.so.1.0.0
2b741daef000-2b741dcef000 ---p 0004a000 fd:00 240454 
/usr/lib64/libcppunit-1.12.so.1.0.0
2b741dcef000-2b741dcf2000 rw-p 0004a000 fd:00 240454 
/usr/lib64/libcppunit-1.12.so.1.0.0
2b741dcf2000-2b741dd3c000 rw-p 2b741dcf2000 00:00 0 
7fffefcff000-7fffefd15000 rw-p 7ffe8000 00:00 0  [stack]
7fffefdf4000-7fffefdf8000 r-xp 7fffefdf4000 00:00 0  [vdso]
ff60-ffe0 ---p  00:00 0  
[vsyscall]
/bin/sh: line 4: 16522 Aborted ${dir}$tst
FAIL: tests/testHttpRequest
SKIP: cache_cf.cc operator void* (not implemented).
...
OK (11)
PASS: tests/testStore

OK (4)
PASS: tests/testString
stub time| persistent connection module initialized
..
OK (10)
PASS: tests/testURL
.
OK (1)
PASS: tests/testConfigParser
...
OK (3)
PASS: tests/testStatHist
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: tools.cc UsingSmp (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: tools.cc UsingSmp (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: cache_cf.cc operator void* (not implemented).
SKIP: tools.cc UsingSmp (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: tools.cc UsingSmp (not implemented).
SKIP: stub_store_rebuild.cc storeRebuildComplete (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: tools.cc InDaemonMode (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: StatHist.cc count (not implemented).
SKIP: 

Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Amos Jeffries

On 2/12/2012 8:27 a.m., Alex Rousskov wrote:

On 12/01/2012 06:34 AM, Eliezer Croitoru wrote:

- For now I am not changing the mem_obj->url and leave the branch as it
is since there is no need for it now.
Once the feature will be fine The change will be placed.

That is fine, but please note that the feature cannot be committed until
that renaming change is implemented so that you can verify that all
other uses of mem_obj::url are still correct even though that member is
no longer a URL but a store entry ID.



+const char *store_url = urlStoreUrl(request);
  SquidMD5_CTX M;
  SquidMD5Init(&M);
  SquidMD5Update(&M, &m, sizeof(m));
-SquidMD5Update(&M, (unsigned char *) url, strlen(url));
+if (store_url)
+   SquidMD5Update(&M, (unsigned char *) store_url, strlen(store_url));
+else
+   SquidMD5Update(&M, (unsigned char *) url, strlen(url));



+const char *
+urlStoreUrl(HttpRequest * request)
+{
+   if (request->store_url)
+  return xstrdup(request->store_url);
+   else
+  return NULL;
+}

The combination leaks urlStoreUrl() result as far as I can tell. Besides
that bug, there are several design issues with this:

* Please do not add urlStoreUrl(request). Add a constant HttpRequest
method instead.

* Please do not dupe the url inside urlStoreUrl().

* Please always return a URL for Store from urlStoreUrl(). There is
always such a url, regardless of configuration (it is either the
requested URL or the rewritten one).




+static helper *storeurls = NULL;

Please use camelCase for variable names unless underline_based_names are
needed for consistency reasons.

Please document all new variables, functions, and methods, including
this one.




+if (http->request->store_url){
+   assert(http->request->store_url);

The assert is not needed.



-StoreEntry *e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+StoreEntry *e;
+if (http->request->store_url){
+   assert(http->request->store_url);
+   e = storeCreateEntry(http->request->store_url, http->log_uri, reqFlags, 
m);
+}
+else{
+   e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+}

To avoid code duplication, compute the URL (the first parameter) and
then call storeCreateEntry with the right first parameter.

BTW, to compute the first parameter, you may be able to just call
HttpRequest::storeUrl() method discussed above (if it always returns a
URL for Store).



+int storeurl_state;
+uint8_t storeurl_fail_count;

The use of underline_based_names is correct here, but you need to add an
underline between the store and url words as well.


+// reset state flag to try redirector again from scratch.
+storeurl_rewrite_done = false;
+case HelperReply::Okay: {
+Note::Pointer urlNote = reply.notes.find("rewrite-url");

Can we unify these parts of redirector and store URL rewriting code to
avoid code duplication and bugs that it brings, like the ones above? Is
ClientRequestContext::clientStoreUrlDone() that much different from
ClientRequestContext::clientRedirectDone() that you cannot merge their
common parts?


They are. The whole of the OK result handling is completely different, 
as well as the context handling conditions after the switch case. Also 
it is important that the context members altered for BH case error 
handling are different to prevent a BH from one helper affecting the 
other helper.


The old->new format conversion is moved to redirect.cc and is already 
shared via my helper upgrade stage 3 patch.





+// XXX: put the storeURL value somewhere.
+http->request->store_url = xstrdup(urlNote->firstValue());

What does this XXX mean? Does not this code put the storeURL value
somewhere already?


Was a note from me to Eliezer which should have been removed.


You may want to either assert that http->request->store_url is not NULL
when this code runs or free the old value of the store_url. This should
probably be done by an HttpRequest setter method though.


+Note::Pointer urlNote = reply.notes.find("rewrite-url");

Make urlNote const if you can.


Please search your patch for tabulation characters in sources and
replace the ones you added with spaces.


Finally, I suggest using "store ID" instead of "store URL" in all
internal names and code comments. It is a good idea to remind us that
there is nothing "URL" about this opaque string.

I would even argue that squid.conf directive should be called
differently to emphasize that this is a URL:ID mapping feature rather
than URL rewriting feature (I do not think we need to maintain backward
compatibility with Squid2 unfortunate naming here).


Correct, we do not.


Amos


Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Eliezer Croitoru

On 12/1/2012 9:27 PM, Alex Rousskov wrote:

On 12/01/2012 06:34 AM, Eliezer Croitoru wrote:

- For now I am not changing the mem_obj->url and leave the branch as it
is since there is no need for it now.
Once the feature will be fine The change will be placed.


That is fine, but please note that the feature cannot be committed until
that renaming change is implemented so that you can verify that all
other uses of mem_obj::url are still correct even though that member is
no longer a URL but a store entry ID.

I understand it, but it's not true.
I did the verification before writing the code.
I'm not a bad-ass reviewer, but I am sure about most of the 
mem_obj::url usage.


If you can take a look at the patch again and find some place where the 
store_url can have an effect other than in the debug sections, I will be 
happy to hear about it so we can think it over again.


One of my problems is that it will be hard for me to test changes if I 
don't have solid ground in the branch.


If you have a good way to help me test it without needing to make huge 
changes before a diff/patch, I will be happy to learn.






+const char *store_url = urlStoreUrl(request);
  SquidMD5_CTX M;
  SquidMD5Init(&M);
  SquidMD5Update(&M, &m, sizeof(m));
-SquidMD5Update(&M, (unsigned char *) url, strlen(url));
+if (store_url)
+   SquidMD5Update(&M, (unsigned char *) store_url, strlen(store_url));
+else
+   SquidMD5Update(&M, (unsigned char *) url, strlen(url));




+const char *
+urlStoreUrl(HttpRequest * request)
+{
+   if (request->store_url)
+  return xstrdup(request->store_url);
+   else
+  return NULL;
+}


The combination leaks urlStoreUrl() result as far as I can tell. Besides
that bug, there are several design issues with this:

Changed.



* Please do not add urlStoreUrl(request). Add a constant HttpRequest
method instead.

Done.



* Please do not dupe the url inside urlStoreUrl().

OK.


* Please always return a URL for Store from urlStoreUrl(). There is
always such a url, regardless of configuration (it is either the
requested URL or the rewritten one).

Ok, got it.




+static helper *storeurls = NULL;


Please use camelCase for variable names unless underline_based_names are
needed for consistency reasons.

Please document all new variables, functions, and methods, including
this one.


Done.




+if (http->request->store_url){
+   assert(http->request->store_url);


The assert is not needed.


OK.




-StoreEntry *e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+StoreEntry *e;
+if (http->request->store_url){
+   assert(http->request->store_url);
+   e = storeCreateEntry(http->request->store_url, http->log_uri, reqFlags, 
m);
+}
+else{
+   e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+}


To avoid code duplication, compute the URL (the first parameter) and
then call storeCreateEntry with the right first parameter.

OK. I hope it's what you meant.



BTW, to compute the first parameter, you may be able to just call
HttpRequest::storeUrl() method discussed above (if it always returns a
URL for Store).

Well, the question here is how http->uri is computed:
I am not sure that the uri and the canonical URI that I am using on 
the request are always the same.

I will check it, but if someone knows, I will wait for the word.




+int storeurl_state;



+uint8_t storeurl_fail_count;


The use of underline_based_names is correct here, but you need to add an
underline between the store and url words as well.


+// reset state flag to try redirector again from scratch.
+storeurl_rewrite_done = false;



+case HelperReply::Okay: {
+Note::Pointer urlNote = reply.notes.find("rewrite-url");


Can we unify these parts of redirector and store URL rewriting code to
avoid code duplication and bugs that it brings, like the ones above? Is
ClientRequestContext::clientStoreUrlDone() that much different from
ClientRequestContext::clientRedirectDone() that you cannot merge their
common parts?


Amos gave me some notes about it last time, and I hope it's fine now.
Anyway, this is outside the scope of my knowledge/capabilities right now.
Maybe Amos?




+// XXX: put the storeURL value somewhere.
+http->request->store_url = xstrdup(urlNote->firstValue());


What does this XXX mean? Does not this code put the storeURL value
somewhere already?

A note Amos left for me in the surrounding code (he did me a favor).
Fixed.



You may want to either assert that http->request->store_url is not NULL
when this code runs or free the old value of the store_url. This should
probably be done by an HttpRequest setter method though.
It should be the HttpRequest thing, since I need this value to be 
available later for as long as the request exists.

Done.




+Note::Pointer urlNote = reply.notes.find("rewrite-url");


Make urlNote const if you can.


Please search your patch for tabulation characters in sources and
replace the 

Build failed in Jenkins: 3.1-matrix » obsd-49-x86 #154

2012-12-01 Thread noc
See http://build.squid-cache.org/job/3.1-matrix/./label=obsd-49-x86/154/

--
Started by upstream project 3.1-matrix build number 154
originally caused by: 
Started by user Amos Jeffries
Building remotely on obsd-49-x86 in workspace 
http://build.squid-cache.org/job/3.1-matrix/./label=obsd-49-x86/ws/
java.io.IOException: Failed to mkdirs: 
http://build.squid-cache.org/job/3.1-matrix/./label=obsd-49-x86/ws/
at hudson.FilePath.mkdirs(FilePath.java:973)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1306)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:679)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:584)
at hudson.model.Run.execute(Run.java:1516)
at hudson.matrix.MatrixRun.run(MatrixRun.java:146)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 
http://build.squid-cache.org/job/3.1-matrix/./label=obsd-49-x86/ws/
at hudson.FilePath.mkdirs(FilePath.java:973)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1306)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:679)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:584)
at hudson.model.Run.execute(Run.java:1516)
at hudson.matrix.MatrixRun.run(MatrixRun.java:146)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 
http://build.squid-cache.org/job/3.1-matrix/./label=obsd-49-x86/ws/
at hudson.FilePath.mkdirs(FilePath.java:973)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1306)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:679)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:584)
at hudson.model.Run.execute(Run.java:1516)
at hudson.matrix.MatrixRun.run(MatrixRun.java:146)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)


Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Eliezer Croitoru



On 12/2/2012 8:20 AM, Amos Jeffries wrote:

On 2/12/2012 8:27 a.m., Alex Rousskov wrote:

On 12/01/2012 06:34 AM, Eliezer Croitoru wrote:

SNIP

+static helper *storeurls = NULL;

Please use camelCase for variable names unless underline_based_names are
needed for consistency reasons.

Please document all new variables, functions, and methods, including
this one.





+if (http->request->store_url){
+assert(http->request->store_url);

The assert is not needed.





-StoreEntry *e = storeCreateEntry(http->uri, http->log_uri,
reqFlags, m);
+StoreEntry *e;
+if (http->request->store_url){
+assert(http->request->store_url);
+e = storeCreateEntry(http->request->store_url,
http->log_uri, reqFlags, m);
+}
+else{
+e = storeCreateEntry(http->uri, http->log_uri, reqFlags, m);
+}

To avoid code duplication, compute the URL (the first parameter) and
then call storeCreateEntry with the right first parameter.

BTW, to compute the first parameter, you may be able to just call
HttpRequest::storeUrl() method discussed above (if it always returns a
URL for Store).



+int storeurl_state;
+uint8_t storeurl_fail_count;

The use of underline_based_names is correct here, but you need to add an
underline between the store and url words as well.


+// reset state flag to try redirector again from scratch.
+storeurl_rewrite_done = false;
+case HelperReply::Okay: {
+Note::Pointer urlNote = reply.notes.find("rewrite-url");

Can we unify these parts of redirector and store URL rewriting code to
avoid code duplication and bugs that it brings, like the ones above? Is
ClientRequestContext::clientStoreUrlDone() that much different from
ClientRequestContext::clientRedirectDone() that you cannot merge their
common parts?


They are. The whole of the OK result handling is completely different,
as well as the context handling conditions after the switch case. Also
it is important that the context members altered for BH case error
handling are different to prevent a BH from one helper affecting the
other helper.

The old->new format conversion is moved to redirect.cc and shared
already via my helper upgrade stage 3 patch.




+// XXX: put the storeURL value somewhere.
+http->request->store_url = xstrdup(urlNote->firstValue());

What does this XXX mean? Does not this code put the storeURL value
somewhere already?


Was a note from me to Eliezer which should have been removed.


Oops :\


You may want to either assert that http->request->store_url is not NULL
when this code runs or free the old value of the store_url. This should
probably be done by an HttpRequest setter method though.


+Note::Pointer urlNote = reply.notes.find("rewrite-url");

Make urlNote const if you can.


Please search your patch for tabulation characters in sources and
replace the ones you added with spaces.


Finally, I suggest using "store ID" instead of "store URL" in all
internal names and code comments. It is a good idea to remind us that
there is nothing "URL" about this opaque string.

I would even argue that squid.conf directive should be called
differently to emphasize that this is a URL:ID mapping feature rather
than URL rewriting feature (I do not think we need to maintain backward
compatibility with Squid2 unfortunate naming here).


Correct we do not.
OK, so if there is no objection from anyone else in the next few days, I 
will change the whole naming convention to StoreID.


Eliezer


Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Amos Jeffries

On 2/12/2012 7:22 p.m., Eliezer Croitoru wrote:

On 12/1/2012 9:27 PM, Alex Rousskov wrote:

On 12/01/2012 06:34 AM, Eliezer Croitoru wrote:

BTW, to compute the first parameter, you may be able to just call
HttpRequest::storeUrl() method discussed above (if it always returns a
URL for Store).

Well the question here is how the http->uri is computed:
I am not sure that always the uri and canonical uri that I am using on 
the request are the same.

I will check it but if someone knows I will wait for the word.


I've done some separate work on cleaning up the URL handling. As far as 
I could tell from that, http->uri is/was supposed to be the canonical URL 
for the request from before any adaptation happened, for logging the 
client request-line. As such it seems far safer to use the HttpRequest 
URL via urlCanonical() whenever possible; that way you pick up any 
changes the URL-rewrite and adaptation did.
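A one-line illustration of that advice (sketch only; the variable name is 
made up and the surrounding context is assumed from the patch):

    // Prefer the canonical URL of the possibly rewritten/adapted request
    // over the pre-adaptation http->uri string.
    const char *canonicalUrl = urlCanonical(http->request);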




Finally, I suggest using "store ID" instead of "store URL" in all
internal names and code comments. It is a good idea to remind us that
there is nothing "URL" about this opaque string.

I would even argue that squid.conf directive should be called
differently to emphasize that this is a URL:ID mapping feature rather
than URL rewriting feature (I do not think we need to maintain backward
compatibility with Squid2 unfortunate naming here).
It's fine by me but I am almost sure that backwards compatibility is 
wanted.
There was a change about the redirector and url_rewrite in the past if 
I'm right.
This kind of a change will remind the admins that the feature works 
differently from the old store_url_rewrite.


Don't worry about that. Just place the old directive name as a second 
word on the NAME: line in cf.data.pre. The config generator will take 
care of the upgrade notices etc and accept both names. The code in 
redirect.cc is already taking care of helper protocol backward 
compatibility.
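For illustration, a cf.data.pre entry following that advice might start 
like this (the new directive name and the LOC value are placeholders, not 
something the thread has settled on; the second word is the old Squid-2 
directive name):

    NAME: store_id_program storeurl_rewrite_program
    TYPE: wordlist
    LOC: Config.Program.store_url
    DEFAULT: none
    DOC_START
        Specify the command line for the helper that maps request URLs
        to store IDs. Keeping the old name as the second word lets
        existing squid.conf files continue to work.
    DOC_END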


Amos


Re: [PATCH] stage 2 of store url as fake store_url helper

2012-12-01 Thread Eliezer Croitoru

On 12/2/2012 8:41 AM, Amos Jeffries wrote:

Don't worry about that. Just place the old directive name as a second
word on the NAME: line in cf.data.pre. The config generator will take
care of the upgrade notices etc and accept both names. The code in
redirect.cc is already taking care of helper protocol backward
compatibility.


Nice!!

If I have time today, I will even try to test the whole thing with a 
real helper and a real-world scenario, since this patch is much more 
stable than any other I have worked on.


Eliezer


Re: [PATCH] Do not send unretriable requests on reused pinned connections

2012-12-01 Thread Henrik Nordström
Sat 2012-12-01 at 11:42 -0700, Alex Rousskov wrote:
 come. Why can't TPROXY reopen a server-side connection?

Because it tries to use the original source ip:port, but it's in
TIME_WAIT from the previous connection and can't be reused.

Regards
Henrik