Re: [squid-dev] [PATCH] Retry cache peer DNS failures more frequently

2016-06-23 Thread Nathan Hoad
Hello,

Did anyone have any thoughts on the issues I had with this? I don't want
this to slip through the cracks :)

Thank you,

Nathan.

On 17 May 2016 at 15:57, Nathan Hoad <nat...@getoffmalawn.com> wrote:

> Hello,
>
> Attached is a patch which makes the changes recommended by Amos - each
> peer now gets its own event for retrying resolution, dependent on the DNS
> TTL. This should also address the concerns raised by Alex. A few caveats though:
>
>  - the cache manager shows generic "peerRefreshDNS" names for each event.
> I can't find any examples that give it a dynamic name, e.g. I'd like
> something like "peerRefreshDNS(example.com)", but I can't think of how
> I'd do that without leaking memory or making some significant changes to
> the event handler system.
>
> - I can't figure out how to reproduce the second failure case, where a
> result comes back but it has no IP addresses. I _think_ using the TTL
> instead of negative_dns_ttl would be valid in that situation, but
> I can't be sure. I figured this was the safest option.
>
>  - eventDelete does not appear to be clearing out events as I expect it
> to, so if you reconfigure Squid you end up with some dead events, like so:
>
> [root@xxx ~]# squidmgr events | grep peerRefresh
> Last event to run: peerRefreshDNS
> peerRefreshDNS  0.331 sec    1  yes
> peerRefreshDNS  0.679 sec    1  yes
> peerRefreshDNS  47.649 sec   1  yes
> peerRefreshDNS  61.619 sec   1  yes
> peerRefreshDNS  207.682 sec  1  yes
> peerRefreshDNS  207.682 sec  1  yes
> peerRefreshDNS  207.682 sec  1  yes
> peerRefreshDNS  207.682 sec  1  yes
> peerRefreshDNS  207.682 sec  1  yes
> [root@xxx ~]# squid -k reconfigure
> [root@xxx ~]# squidmgr events | grep peerRefresh
> Last event to run: peerRefreshDNS
> peerRefreshDNS  0.763 sec     1  yes
> peerRefreshDNS  0.763 sec     1  yes
> peerRefreshDNS  41.755 sec    1  yes
> peerRefreshDNS  55.755 sec    1  yes
> peerRefreshDNS  56.187 sec    1  no
> peerRefreshDNS  202.250 sec   1  no
> peerRefreshDNS  202.250 sec   1  no
> peerRefreshDNS  3599.758 sec  1  yes
> peerRefreshDNS  3599.758 sec  1  yes
> peerRefreshDNS  3599.758 sec  1  yes
> peerRefreshDNS  3599.758 sec  1  yes
> peerRefreshDNS  3599.758 sec  1  yes
>
> If I run squid -k reconfigure again, then the events with invalid callback
> data are cleared out, so it doesn't grow indefinitely at least. I'm not
> sure how or if I should fix this.
>
> Thank you,
>
> Nathan.
>
>
> On 10 May 2016 at 18:13, Alex Rousskov <rouss...@measurement-factory.com>
> wrote:
>
>> On 05/10/2016 01:50 AM, Amos Jeffries wrote:
>>
>> > Then each peer gets its own re-lookup event scheduled
>>
>> If applied correctly, this approach would also solve the misapplication
>> problem I described in my concurrent review. Unfortunately, it requires
>> serious work. Fortunately, you have already converted CachePeer from
>> being a POD into a proper class. That will help!
>>
>>
>> Thank you,
>>
>> Alex.
>>
>>
>
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Increase request buffer size to 64kb

2016-06-23 Thread Nathan Hoad
Hello,

A gentle reminder about this patch; I don't want it to get forgotten :)
From the history I can see, I don't think there's any work remaining, just
some more review.

Thank you,

Nathan.

On 14 April 2016 at 18:52, Amos Jeffries  wrote:

> On 14/04/2016 3:56 p.m., Alex Rousskov wrote:
> > On 04/13/2016 06:29 PM, Amos Jeffries wrote:
> >> On 14/04/2016 4:18 a.m., Alex Rousskov wrote:
> >>> On 04/13/2016 08:22 AM, Amos Jeffries wrote:
>  I am wondering if the class StoreIOBuffer needs to have move
>  constructor/assignment added now that getClientStreamBuffer() is
>  returning a temporary variable by-value.
> >
> >
> >>> Sorry, I do not follow your logic:
> >>>
> >>> 1. AFAIK, move methods are _optimizations_ so any "X needs move
> >>> constructor/assignment" statement does not compute for me in a
> seemingly
> >>> non-optimization context like "getClientStreamBuffer() is returning a
> >>> temporary variable by-value".
> >>>
> >>> 2. AFAIK, StoreIOBuffer already has move constructor and assignment
> >>> operator (provided by the compiler).
> >
> >
> >> I was wondering if we were paying unnecessary cycles doing the object
> >> data copy. If it's provided then that's fine and I'm happy with this if
> you are.
> >
> > Yes, I believe that we can assume that the compiler provides a move
> > constructor for StoreIOBuffer. However, that move constructor does not
> > optimize any "object data copy" AFAICT. StoreIOBuffer is essentially a
> > POD so the entire object (about 20 bytes) gets "copied" and nothing can
> > be "moved" instead, even if we write something by hand. One cannot
> > "move" an integer or a raw pointer!
>
> Well, to be pedantic one can. It means the 'old' object has it set to
> 0/null afterwards. It's just less than optimal in the POD case.
>
> If this class were taking a copy of the free functor and releasing the
> buffer on destruct it would need to be move instead of copy to prevent
> double-free. But that's not happening here.
>
> >
> > This is a new area for me, but I think that "move" methods optimize
> > copying of objects that have non-trivial copy constructors and
> > destructors. By definition, PODs do not have those. StoreIOBuffer does
> > not either. Am I missing something?
>
> Not that I can see. Good point.
>
> Amos
>
>


Re: [squid-dev] [PATCH] Retry cache peer DNS failures more frequently

2016-05-16 Thread Nathan Hoad
Hello,

Attached is a patch which makes the changes recommended by Amos - each peer
now gets its own event for retrying resolution, dependent on the DNS TTL.
This should also address the concerns raised by Alex. A few caveats though:

 - the cache manager shows generic "peerRefreshDNS" names for each event. I
can't find any examples that give it a dynamic name, e.g. I'd like
something like "peerRefreshDNS(example.com)", but I can't think of how I'd
do that without leaking memory or making some significant changes to the
event handler system.

- I can't figure out how to reproduce the second failure case, where a
result comes back but it has no IP addresses. I _think_ using the TTL
instead of negative_dns_ttl would be valid in that situation, but
I can't be sure. I figured this was the safest option.

 - eventDelete does not appear to be clearing out events as I expect it to,
so if you reconfigure Squid you end up with some dead events, like so:

[root@xxx ~]# squidmgr events | grep peerRefresh
Last event to run: peerRefreshDNS
peerRefreshDNS  0.331 sec    1  yes
peerRefreshDNS  0.679 sec    1  yes
peerRefreshDNS  47.649 sec   1  yes
peerRefreshDNS  61.619 sec   1  yes
peerRefreshDNS  207.682 sec  1  yes
peerRefreshDNS  207.682 sec  1  yes
peerRefreshDNS  207.682 sec  1  yes
peerRefreshDNS  207.682 sec  1  yes
peerRefreshDNS  207.682 sec  1  yes
[root@xxx ~]# squid -k reconfigure
[root@xxx ~]# squidmgr events | grep peerRefresh
Last event to run: peerRefreshDNS
peerRefreshDNS  0.763 sec     1  yes
peerRefreshDNS  0.763 sec     1  yes
peerRefreshDNS  41.755 sec    1  yes
peerRefreshDNS  55.755 sec    1  yes
peerRefreshDNS  56.187 sec    1  no
peerRefreshDNS  202.250 sec   1  no
peerRefreshDNS  202.250 sec   1  no
peerRefreshDNS  3599.758 sec  1  yes
peerRefreshDNS  3599.758 sec  1  yes
peerRefreshDNS  3599.758 sec  1  yes
peerRefreshDNS  3599.758 sec  1  yes
peerRefreshDNS  3599.758 sec  1  yes

If I run squid -k reconfigure again, then the events with invalid callback
data are cleared out, so it doesn't grow indefinitely at least. I'm not
sure how or if I should fix this.
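[Editor's note: the "dead events" above are consistent with an event queue that checks callback-data validity lazily, when the queue is drained, rather than eagerly at eventDelete time. The sketch below illustrates that pattern only; the names (Event, EventQueue, runDue) are made up and this is not Squid's actual event.cc.]

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

// Each scheduled event holds a weak handle to its callback data. If the
// owner (e.g. a CachePeer) is destroyed on reconfigure, the handle expires
// and the event is skipped instead of dereferencing freed memory.
struct Event {
    std::function<void(void *)> handler;
    std::weak_ptr<void> cbdata;   // validity check, akin to Squid's cbdata
    double when;                  // seconds until due (unused in this sketch)
};

struct EventQueue {
    std::vector<Event> events;

    void add(std::function<void(void *)> h, std::shared_ptr<void> data, double when) {
        events.push_back(Event{std::move(h), data, when});
    }

    // Run everything that is due. Stale events (expired cbdata) are only
    // discarded here, which is why they would linger in a "mgr:events"
    // style listing until the queue is next drained.
    int runDue() {
        int ran = 0;
        for (auto &e : events) {
            if (auto data = e.cbdata.lock()) {
                e.handler(data.get());
                ++ran;
            } // else: invalid callback data; silently dropped
        }
        events.clear();
        return ran;
    }
};
```

Under this model, a second reconfigure clearing the stale entries is exactly the expected behaviour: the first drain after invalidation is what removes them.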

Thank you,

Nathan.


On 10 May 2016 at 18:13, Alex Rousskov 
wrote:

> On 05/10/2016 01:50 AM, Amos Jeffries wrote:
>
> > Then each peer gets its own re-lookup event scheduled
>
> If applied correctly, this approach would also solve the misapplication
> problem I described in my concurrent review. Unfortunately, it requires
> serious work. Fortunately, you have already converted CachePeer from
> being a POD into a proper class. That will help!
>
>
> Thank you,
>
> Alex.
>
>
Use DNS TTL for deciding when to reschedule resolution of cache peers, and retry failed cache peers much more often.

This improves the situation for configurations where Squid forwards all
requests to a cache peer, and a temporary DNS failure left that cache peer
unresolved. Previously this would have left users without web access for
an hour, without manual intervention.

A summary:

* Use DNS TTL when scheduling the next resolution attempt, resolving each
  cache peer independently rather than at a fixed hourly interval.

* In the failure case, attempt another resolution according to negative_dns_ttl.

* Add the TTL to the Dns::LookupDetails object.

* Fix a small but consistent typo

This work is submitted on behalf of Bloomberg L.P.
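[Editor's note: the scheduling rule the commit message describes can be sketched as a pure function: use the positive DNS TTL when the lookup succeeded, and fall back to negative_dns_ttl when it failed. This is an illustration only; the function name and the clamping bounds are assumptions, not Squid's actual code.]

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical helper: seconds until the next resolution attempt for one
// cache peer. dnsTtl < 0 means the answer carried no usable TTL.
long nextRefreshDelay(long dnsTtl, long negativeDnsTtl, bool lookupFailed)
{
    const long minDelay = 60;    // assumed floor: don't hammer the resolver
    const long maxDelay = 3600;  // old behaviour: retry at most hourly

    if (lookupFailed || dnsTtl < 0)
        return std::clamp(negativeDnsTtl, minDelay, maxDelay);

    return std::clamp(dnsTtl, minDelay, maxDelay);
}
```

With per-peer events, each peer calls something like this with its own lookup result, so one peer's DNS failure no longer ties all peers to the same hourly retry.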

=== modified file 'src/FwdState.cc'
--- src/FwdState.cc	2016-03-12 20:27:35 +
+++ src/FwdState.cc	2016-05-11 00:34:29 +
@@ -726,41 +726,41 @@ FwdState::connectedToPeer(Security::Encr
 peerConnectFailed(p);
 serverConnection()->close();
 return;
 }
 
 if (answer.tunneled) {
 // TODO: When ConnStateData establishes tunnels, its state changes
 // [in ways that may affect logging?]. Consider informing
 // ConnStateData about our tunnel or otherwise unifying tunnel
 // establishment [side effects].
 unregister(serverConn); // async call owns it now
 complete(); // destroys us
 return;
 }
 
 // should reach ConnStateData before the dispatched Client job starts
 CallJobHere1(17, 4, request->clientConnectionManager, ConnStateData,
  ConnStateData::notePeerConnection, serverConnection());
 
 if (serverConnection()->getPeer())
-peerConnectSucceded(serverConnection()->getPeer());
+peerConnectSucceeded(serverConnection()->getPeer());
 

[squid-dev] [PATCH] Retry cache peer DNS failures more frequently

2016-05-09 Thread Nathan Hoad
Hello,

Attached is a patch which makes Squid attempt to resolve failed DNS lookups
for cache peers more frequently than an hour. It's user configurable, with
a default of one minute.

As the patch preamble states, this is necessary for setups where all
traffic is forwarded to a cache peer, as a failed DNS lookup will result in
users being unable to access the web until it is resolved successfully.

I'm not convinced the configuration option (cache_peer_negative_dns_ttl) is
the best name, so I'm happy to take suggestions here.

Thank you,

Nathan.
When DNS lookups for cache peers fail, try to resolve them again a little sooner than an hour.

This improves the situation for configurations where Squid forwards all
requests to a cache peer, and a temporary DNS failure left that cache peer
unresolved. Previously this would have left users without web access for
an hour, without manual intervention.

This also introduces a configuration option for controlling how long to wait,
with a default of one minute.

This work is submitted on behalf of Bloomberg L.P.

=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2016-04-01 17:54:10 +
+++ src/SquidConfig.h	2016-05-04 01:29:07 +
@@ -240,6 +240,7 @@
 wordlist *dns_nameservers;
 CachePeer *peers;
 int npeers;
+time_t cachePeerNegativeDnsTtl;
 
 struct {
 int size;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-04-19 10:40:05 +
+++ src/cf.data.pre	2016-05-04 06:42:41 +
@@ -3552,6 +3552,19 @@
 	instead of to your parents.
 DOC_END
 
+NAME: cache_peer_negative_dns_ttl
+COMMENT: time-units
+TYPE: time_t
+LOC: Config.cachePeerNegativeDnsTtl
+DEFAULT: 1 minutes
+DOC_START
+	How often to retry failed DNS lookups for cache peers.
+	
+	It is not recommended to set this lower than negative_dns_ttl, as the
+	cached negative responses will prevent the DNS lookups from
+	succeeding.
+DOC_END
+
 NAME: forward_max_tries
 DEFAULT: 25
 TYPE: int

=== modified file 'src/neighbors.cc'
--- src/neighbors.cc	2016-01-01 00:12:18 +
+++ src/neighbors.cc	2016-05-04 01:23:18 +
@@ -58,6 +58,7 @@
 #endif
 static void neighborCountIgnored(CachePeer *);
 static void peerRefreshDNS(void *);
+static void peerRefreshFailedDNS(void *);
 static IPH peerDNSConfigure;
 static bool peerProbeConnect(CachePeer *);
 static CNCB peerProbeConnectDone;
@@ -1182,11 +1183,13 @@
 
 if (ia == NULL) {
 debugs(0, DBG_CRITICAL, "WARNING: DNS lookup for '" << p->host << "' failed!");
+eventAdd("peerRefreshFailedDNS", peerRefreshFailedDNS, NULL, Config.cachePeerNegativeDnsTtl, 1);
 return;
 }
 
 if ((int) ia->count < 1) {
 debugs(0, DBG_CRITICAL, "WARNING: No IP address found for '" << p->host << "'!");
+eventAdd("peerRefreshFailedDNS", peerRefreshFailedDNS, NULL, Config.cachePeerNegativeDnsTtl, 1);
 return;
 }
 
@@ -1216,6 +1219,21 @@
 }
 
 static void
+peerRefreshFailedDNS(void *data)
+{
+CachePeer *p = NULL;
+
+if (eventFind(peerRefreshFailedDNS, NULL))
+eventDelete(peerRefreshFailedDNS, NULL);
+
+for (p = Config.peers; p; p = p->next) {
+if (p->in_addr.isAnyAddr()) {
+ipcache_nbgethostbyname(p->host, peerDNSConfigure, p);
+}
+}
+}
+
+static void
 peerRefreshDNS(void *data)
 {
 CachePeer *p = NULL;



[squid-dev] [PATCH] New error page token for unescaped external ACL output

2016-04-05 Thread Nathan Hoad
Hello,

Attached is a patch that allows using unquoted external ACL output in
error pages. This is useful for external ACLs that return HTML and
thus shouldn't be escaped.

Thank you,

Nathan.
Add a new error page token for unquoted external ACL messages.

This is useful for external ACLs that send back messages that contain actual HTML.

This work is submitted on behalf of Bloomberg L.P.
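[Editor's note: the difference between %o and %O is whether the helper's message is HTML-escaped before substitution into the error page. A simplified sketch of that quoting decision follows; Squid's real conversion lives in ErrorState::Convert and its html_quote() escapes more than these characters, so treat the names and coverage here as illustrative.]

```cpp
#include <cassert>
#include <string>

// Minimal HTML escaping, as an error-page formatter might apply to
// untrusted helper output. Covers only the essential characters.
std::string htmlQuote(const std::string &s)
{
    std::string out;
    for (char c : s) {
        switch (c) {
        case '<': out += "&lt;"; break;
        case '>': out += "&gt;"; break;
        case '&': out += "&amp;"; break;
        case '"': out += "&quot;"; break;
        default:  out += c;
        }
    }
    return out;
}

// %o: escaped (treat helper output as plain text).
// %O: passed through (helper deliberately emits trusted HTML).
std::string expandAclToken(char token, const std::string &helperMsg)
{
    const bool doQuote = (token == 'o'); // 'O' disables quoting
    return doQuote ? htmlQuote(helperMsg) : helperMsg;
}
```

This mirrors the fall-through in the patch: `case 'O'` clears `do_quote` and then shares the `case 'o'` lookup of the external ACL message.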

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-04-01 17:54:10 +
+++ src/cf.data.pre	2016-04-05 23:25:41 +
@@ -7950,40 +7950,41 @@ DOC_START
 	e.g. 404:ERR_CUSTOM_ACCESS_DENIED
 
 	Alternatively you can tell Squid to reset the TCP connection
 	by specifying TCP_RESET.
 
 	Or you can specify an error URL or URL pattern. The browsers will
 	get redirected to the specified URL after formatting tags have
 	been replaced. Redirect will be done with 302 or 307 according to
 	HTTP/1.1 specs. A different 3xx code may be specified by prefixing
 	the URL. e.g. 303:http://example.com/
 
 	URL FORMAT TAGS:
 		%a	- username (if available. Password NOT included)
 		%B	- FTP path URL
 		%e	- Error number
 		%E	- Error description
 		%h	- Squid hostname
 		%H	- Request domain name
 		%i	- Client IP Address
 		%M	- Request Method
+		%O	- Unescaped message result from external ACL helper
 		%o	- Message result from external ACL helper
 		%p	- Request Port number
 		%P	- Request Protocol name
 		%R	- Request URL path
 		%T	- Timestamp in RFC 1123 format
 		%U	- Full canonical URL from client
 			  (HTTPS URLs terminate with *)
 		%u	- Full canonical URL from client
 		%w	- Admin email from squid.conf
 		%x	- Error name
 		%%	- Literal percent (%) code
 
 DOC_END
 
 COMMENT_START
  OPTIONS INFLUENCING REQUEST FORWARDING 
  -
 COMMENT_END
 
 NAME: nonhierarchical_direct

=== modified file 'src/errorpage.cc'
--- src/errorpage.cc	2016-04-03 23:41:58 +
+++ src/errorpage.cc	2016-04-05 23:07:56 +
@@ -910,40 +910,42 @@ ErrorState::Convert(char token, bool bui
 case 'm':
 if (building_deny_info_url) break;
 #if USE_AUTH
 if (auth_user_request.getRaw())
 p = auth_user_request->denyMessage("[not available]");
 else
 p = "[not available]";
 #else
 p = "-";
 #endif
 break;
 
 case 'M':
 if (request) {
 const SBuf &m = request->method.image();
 mb.append(m.rawContent(), m.length());
 } else if (!building_deny_info_url)
 p = "[unknown method]";
 break;
 
+case 'O':
+do_quote = 0;
 case 'o':
 p = request ? request->extacl_message.termedBuf() : external_acl_message;
 if (!p && !building_deny_info_url)
 p = "[not available]";
 break;
 
 case 'p':
 if (request) {
 mb.appendf("%u", request->url.port());
 } else if (!building_deny_info_url) {
 p = "[unknown port]";
 }
 break;
 
 case 'P':
 if (request) {
 p = request->url.getScheme().c_str();
 } else if (!building_deny_info_url) {
 p = "[unknown protocol]";
 }




Re: [squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2016-04-04 Thread Nathan Hoad
Hi Christos,

I've reattached this patch, adjusted for context - the previous patch
needs quite a bit of fuzzing to apply correctly. I've also included a
commit message that describes the use case it fixes, as discussed in
this email thread. The actual changes and the functionality are the
same.  This should make committing it a little easier.

Thank you,

Nathan.

On 5 March 2016 at 00:31, Dave Lewthwaite
<dave.lewthwa...@realitymine.com> wrote:
> Hi Christos
>
> Were you able to apply this patch to trunk? I’m keen to test it in our set up.
>
> Thanks
>
>
>
>
>
>
>
> On 22/02/2016 16:24, "squid-dev on behalf of Christos Tsantilas" 
> <squid-dev-boun...@lists.squid-cache.org on behalf of chris...@chtsanti.net> 
> wrote:
>
>>It was just forgotten.
>>
>>There is one similar patch posted under the mail thread "[PATCH] Include
>>intermediate certs to client when using peek/stare" by Dave Lewthwaite,
>>but it does not cover the case where the crtd daemon is used.
>>
>>
>>Nathan's patch is OK.
>>If there are no objections I will apply Nathan's patch to trunk.
>>
>>
>>
>>On 02/21/2016 11:57 PM, Nathan Hoad wrote:
>>> Hello,
>>>
>>> I've just started some clean up of local patches in preparation of
>>> upgrading to Squid 4, and I've noticed this hasn't been applied. What
>>> do I need to do to get this applied?
>>>
>>> Thank you,
>>>
>>> Nathan.
>>>
>>> On 19 June 2015 at 18:26, Tsantilas Christos
>>> <chtsa...@users.sourceforge.net> wrote:
>>>> The patch should be applied to trunk.
>>>>
>>>>
>>>> On 06/19/2015 04:26 AM, Amos Jeffries wrote:
>>>>>
>>>>> On 7/06/2015 2:41 a.m., Nathan Hoad wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> Attached is a patch making the changes recommended by Christos. I've
>>>>>> done as described, creating a Ssl::configureUnconfiguredSslContext
>>>>>> function, rather than making the changes to Ssl::configureSSL.
>>>>>
>>>>>
>>>>> Christos, can you please review and apply if it is acceptable to you?
>>>>>
>>>>> Cheers
>>>>> Amos
>>>>>
Add chained certificates and signing certificate to peek-then-bumped connections.

The scenario this patch addresses is when Squid is configured with an
intermediate signing CA certificate, and clients have the root CA installed on
their machines. What happens is that the generated certificates come down with
an unknown issuer (the intermediate signing certificate), with no
intermediates, so they are rejected. By adding the configured certificate chain
as old client-first mode did, the intermediate and root certificates come down
as well, resulting in the issuer being identified and the connection being
established "securely".

This work is submitted on behalf of Bloomberg L.P.

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2016-04-03 23:41:58 +
+++ src/client_side.cc	2016-04-05 04:02:26 +
@@ -2850,40 +2850,43 @@ ConnStateData::sslCrtdHandleReply(const
 
 if (reply.result == Helper::BrokenHelper) {
 debugs(33, 5, HERE << "Certificate for " << sslConnectHostOrIp << " cannot be generated. ssl_crtd response: " << reply);
 } else if (!reply.other().hasContent()) {
 debugs(1, DBG_IMPORTANT, HERE << "\"ssl_crtd\" helper returned <NULL> reply.");
 } else {
 Ssl::CrtdMessage reply_message(Ssl::CrtdMessage::REPLY);
 if (reply_message.parse(reply.other().content(), reply.other().contentSize()) != Ssl::CrtdMessage::OK) {
 debugs(33, 5, HERE << "Reply from ssl_crtd for " << sslConnectHostOrIp << " is incorrect");
 } else {
 if (reply.result != Helper::Okay) {
 debugs(33, 5, HERE << "Certificate for " << sslConnectHostOrIp << " cannot be generated. ssl_crtd response: " << reply_message.getBody());
 } else {
 debugs(33, 5, HERE << "Certificate for " << sslConnectHostOrIp << " was successfully recieved from ssl_crtd");
 if (sslServerBump && (sslServerBump->act.step1 == Ssl::bumpPeek || sslServerBump->act.step1 == Ssl::bumpStare)) {
 doPeekAndSpliceStep();
 aut

Re: [squid-dev] [PATCH] Increase request buffer size to 64kb

2016-04-04 Thread Nathan Hoad
Hello,

One large reply again. Attached is a patch with all the earlier
changes recommended by Alex. Most notably this removes the change for
read_ahead_gap, leaving the default at 16kb, and opting to use
read_ahead_gap for this rather than introduce a new option.

Alex,

>> Make the size of the buffer on Http::Stream configurable.
>
> s/buffer on Http::Stream/ClientStream buffer/

That would make it inaccurate - there's still one use of
clientStreamInit that does not use this option, in
client_side_request.cc:clientBeginRequest, that works back to
ESISegment::buf, which is a char[HTTP_REQBUF_SZ]. I could convert
ESISegment::buf to a MemBlob and fix the call location in
ESIInclude::Start, but I would not be able to test the code at all, as
I've not used ESI before.

On 4 April 2016 at 10:32, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 03/30/2016 11:50 PM, Nathan Hoad wrote:
>
>> Alex, I've tried 8, 16, 32, 128 and 512 KB values - all sizes leading
>> up to 64 KB scaled appropriately. 128 and 512 were the same or
>> slightly worse than 64, so I think 64 KB is the "best value".
>
> Sounds good, but it is even more important that you have explained *why*
> below.
>
>
>> On 30 March 2016 at 21:29, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>> One thing you need to keep in mind with all this is that the above
>>> macros *does not* configure the network I/O buffers.
>
>> I don't think this is quite true - I don't think it's intentional, but
>> I am lead to believe that HTTP_REQBUF_SZ does influence network IO
>> buffers in some way. See below.
>
> You are probably both right: HTTP_REQBUF_SZ influences network I/O but
> not necessarily all network I/O buffer capacities. In other words,
> increasing HTTP_REQBUF_SZ improves uncachable miss performance until
> HTTP_REQBUF_SZ matches the size of the second smallest buffer (or I/O
> size) in the buffer chain. In your test, that limit was read_ahead_gap.
>
>
>>> In the long-term plan those internal uses will be replaced by SBuf
>
> ... or MemBlob or something that does not exist yet but uses MemBlob.
>
> SBuf it trying to provide "everything to everybody" and, naturally,
> becomes a bad choice in some situations with strict requirements (e.g.,
> when one has to preserve raw buffer pointer but does not want single
> buffer content ownership).
>
>
>> Looking purely at system calls, it shows the reads from the upstream
>> server are being read in 16 KB chunks, where as writes to the client
>> are done in 4 KB chunks. With the patch, the writes to the client
>> increase to 16 KB, so it appears that HTTP_REQBUF_SZ does influence
>> network IO in this way.
>
> Naturally: One cannot write using 16 KB chunks when the data goes
> through a 4KB buffer. Please note that the position of that 4KB buffer
> in the buffer chain is almost irrelevant -- the weakest link in the
> chain determines "bytes pipelining" or "bytes streaming" speed.
>
>
>
>> However post-patch the improvement is quite
>> substantial, as the write(2) calls are now using the full 64 KB
>> buffer.
>
> This is an important finding! It shows that read_ahead_gap documentation
> lies or at least misleads: Evidently, that directive controls not just
> accumulation of data but [maximum] network read sizes, at least for
> HTTP. Please fix that documentation in your next patch revision.

Done.

>
>
>> At this stage, I'm not entirely sure what the best course of action
>> is. I'm happy to investigate things further, if people have
>> suggestions. read_ahead_gap appears to influence downstream write
>> buffer sizes, at least up to the maximum of HTTP_REQBUF_SZ.
>
> In other words, Squid Client (http.cc and clients/*) is able to give
> Squid Server (client_side*cc and servers/*) the results of a single
> Client network read (i.e., a single read from the origin server or
> peer). The result is copied to the ClientStream StoreIOBuffer. Thus, if
> we do not want to slow Squid down artificially, the StoreIOBuffer
> capacity should match the maximum Client read I/O size.
>
> Do we use that StoreIOBuffer to write(2) to the HTTP client? If not,
> what controls the I/O buffer size for writing to the HTTP client?

We do yes - that buffer makes it through
client_side.cc:clientSocketRecipient, Http::One::Server::handleReply,
and finally Http::Stream::sendBody, where the actual write is
performed.
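[Editor's note: Alex's "weakest link" observation can be stated mechanically: when data flows through a chain of fixed-size buffers, the largest write the final hop can issue is bounded by the smallest buffer upstream of it. A toy model, with sizes taken from the defaults discussed in this thread rather than measured values:]

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Maximum bytes that can move through the chain in one pass: each hop can
// hand over at most what its own buffer holds, so the minimum wins.
std::size_t effectiveChunkSize(const std::vector<std::size_t> &bufferSizes)
{
    return *std::min_element(bufferSizes.begin(), bufferSizes.end());
}
```

With the pre-patch chain {16 KB read_ahead_gap, 4 KB StoreIOBuffer}, client writes top out at 4 KB; raising HTTP_REQBUF_SZ to 64 KB only pays off once read_ahead_gap is raised to match, which is what Nathan's measurements showed.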


>
>> It would
>> be nice if that buffer size was independently run-time configurable
>
> Actually, it would be nice if the buffer sizes just matched, avoiding
> both artificial slowdown and the need for care

Re: [squid-dev] [PATCH] Increase request buffer size to 64kb

2016-03-31 Thread Nathan Hoad
I've attached two patches - they're functionally identical, one uses
SBuf and the other using MemBuf. The patch achieves the following:

 - Moves all of Http::Stream's uses of StoreIOBuffer objects into a
method, as done previously.
 - Adjusts the read_ahead_gap default to 64 KB as Amos suggested.
 - Replaces Http::Stream::reqbuf with a SBuf/MemBuf.
 - Make the buffer member private.
 - Adds a new configuration option for the size of the buffer, with a
default of 64 KB, in the spirit of the original patch.

I'm not convinced that http_stream_buffer_size is the best name, but I
can't think of anything better. The patches are also abusing
MemBuf/SBuf - they're being used purely to allow configuration of the
buffer size, they're not being used as they normally are, or growing
dynamically. That would involve quite a few more changes, I think.

Thank you,

Nathan.

On 1 April 2016 at 03:20, Alex Rousskov
 wrote:
> On 03/31/2016 04:45 AM, Amos Jeffries wrote:
>> PS. If you like I will take your current patch and merge it tomorrow.
>
> Please do not do that until others have a chance to review the results.
>
> Alex.
>
Make the size of the buffer on Http::Stream configurable.

This work is submitted on behalf of Bloomberg L.P.

=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2016-03-12 20:27:35 +
+++ src/SquidConfig.h	2016-04-01 00:57:42 +
@@ -80,6 +80,7 @@
 int64_t max;
 } quickAbort;
 int64_t readAheadGap;
+int64_t httpStreamBufferSize;
 RemovalPolicySettings *replPolicy;
 RemovalPolicySettings *memPolicy;
 #if USE_HTTP_VIOLATIONS

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-03-12 20:27:35 +
+++ src/cf.data.pre	2016-04-01 00:57:42 +
@@ -5594,12 +5594,22 @@
 COMMENT: buffer-size
 TYPE: b_int64_t
 LOC: Config.readAheadGap
-DEFAULT: 16 KB
+DEFAULT: 64 KB
 DOC_START
 	The amount of data the cache will buffer ahead of what has been
 	sent to the client when retrieving an object from another server.
 DOC_END
 
+NAME: http_stream_buffer_size
+COMMENT: buffer-size
+TYPE: b_int64_t
+LOC: Config.httpStreamBufferSize
+DEFAULT: 64 KB
+DOC_START
+	The buffer size that the cache will use when streaming response objects to
+	clients.
+DOC_END
+
 NAME: negative_ttl
 IFDEF: USE_HTTP_VIOLATIONS
 COMMENT: time-units

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2016-03-10 06:55:17 +
+++ src/client_side.cc	2016-04-01 00:57:42 +
@@ -1012,9 +1012,7 @@
 http->uri = xstrdup(uri);
 setLogUri (http, uri);
 auto *context = new Http::Stream(clientConnection, http);
-StoreIOBuffer tempBuffer;
-tempBuffer.data = context->reqbuf;
-tempBuffer.length = HTTP_REQBUF_SZ;
+StoreIOBuffer tempBuffer = context->getStoreIOBuffer();
 clientStreamInit(&http->client_stream, clientGetMoreData, clientReplyDetach,
  clientReplyStatus, new clientReplyContext(http), clientSocketRecipient,
  clientSocketDetach, context, tempBuffer);
@@ -1359,9 +1357,7 @@
 http->req_sz = hp->messageHeaderSize();
 Http::Stream *result = new Http::Stream(csd->clientConnection, http);
 
-StoreIOBuffer tempBuffer;
-tempBuffer.data = result->reqbuf;
-tempBuffer.length = HTTP_REQBUF_SZ;
+StoreIOBuffer tempBuffer = result->getStoreIOBuffer();
 
 ClientStreamData newServer = new clientReplyContext(http);
 ClientStreamData newClient = result;

=== modified file 'src/http/Stream.cc'
--- src/http/Stream.cc	2016-01-24 17:41:43 +
+++ src/http/Stream.cc	2016-04-01 00:57:42 +
@@ -7,6 +7,7 @@
  */
 
 #include "squid.h"
+#include "SquidConfig.h"
 #include "client_side_request.h"
 #include "http/Stream.h"
 #include "HttpHdrContRange.h"
@@ -23,11 +24,12 @@
 connRegistered_(false)
 {
 assert(http != nullptr);
-memset(reqbuf, '\0', sizeof (reqbuf));
 flags.deferred = 0;
 flags.parsed_ok = 0;
 deferredparams.node = nullptr;
 deferredparams.rep = nullptr;
+
+requestBuffer.reserveCapacity(Config.httpStreamBufferSize);
 }
 
 Http::Stream::~Stream()
@@ -109,12 +111,10 @@
 debugs(33, 5, reply << " written " << http->out.size << " into " << clientConnection);
 
 /* More data will be coming from the stream. */
-StoreIOBuffer readBuffer;
+StoreIOBuffer readBuffer = getStoreIOBuffer();
 /* XXX: Next requested byte in the range sequence */
 /* XXX: length = getmaximumrangelenfgth */
 readBuffer.offset = getNextRangeOffset();
-readBuffer.length = HTTP_REQBUF_SZ;
-readBuffer.data = reqbuf;
 /* we may note we have reached the end of the wanted ranges */
 clientStreamRead(getTail(), http, readBuffer);
 }
@@ -568,6 +568,16 @@
 deferredparams.queuedBuffer = receivedData;
 }
 
+StoreIOBuffer
+Http::Stream::getStoreIOBuffer()
+{
+StoreIOBuffer tempBuffer;
+tempBuffer.data = this->requestBuffer.rawSpace(Config.httpStreamBufferSize);
+tempBuffer.length = this->requestBuffer.spaceSize();
+

Re: [squid-dev] [PATCH] Increase request buffer size to 64kb

2016-03-30 Thread Nathan Hoad
Responding to both emails, including my findings, so apologies in
advance for the extremely long email.

I've gone through the places that use HTTP_REQBUF_SZ, and it seems to
be Http::Stream::pullData() that's benefiting from this change. To
simplify all Http::Stream-related uses of HTTP_REQBUF_SZ, I've
attached a work-in-progress patch that unifies them all into a method
on Http::Stream and increases only its buffer size, so people are
welcome to try and replicate my findings.

Alex, I've tried 8, 16, 32, 128 and 512 KB values - all sizes leading
up to 64 KB scaled appropriately. 128 and 512 were the same or
slightly worse than 64, so I think 64 KB is the "best value".

My page size and kernel buffer sizes are both stock - I have not
tweaked anything on this machine.

$ uname -a
Linux nhoad-laptop 4.4.3-1-ARCH #1 SMP PREEMPT Fri Feb 26 15:09:29 CET
2016 x86_64 GNU/Linux
$ getconf PAGESIZE
4096
$ cat /proc/sys/net/ipv4/tcp_wmem /proc/sys/net/ipv4/tcp_rmem
4096	16384	4194304
4096	87380	6291456

The buffer size on Http::Stream does not grow dynamically, it is a
simple char[HTTP_REQBUF_SZ]. I could look into making it grow
dynamically if we're interested in that, but it would be a lot of work
(to me - feel free to suggest somewhere else this is done and I can
try to learn from that). I can't definitively say that increasing this
constant has no impact on smaller objects, however using Apache bench
indicated no impact in performance, maintaining ~6k requests a second
pre- and post-patch for a small uncached object.

Amos, replies inline.

On 30 March 2016 at 21:29, Amos Jeffries  wrote:
> On 30/03/2016 6:53 p.m., Alex Rousskov wrote:
>
> One thing you need to keep in mind with all this is that the above
> macros *does not* configure the network I/O buffers.

I don't think this is quite true - I don't think it's intentional, but
I am led to believe that HTTP_REQBUF_SZ does influence network IO
buffers in some way. See below.

> The network HTTP request buffer is controlled by request_header_max_size
> - default 64KB.
>
> The network HTTP reply buffer is controlled by reply_header_max_size -
> default 64KB.
>
> The HTTP_REQBUF_SZ macro configures the StoreIOBuffer object size. Which
> is mostly used for StoreIOBuffer (client-streams or disk I/O) or local
> stack allocated variables. Which is tuned to match the filesystem page
> size - default 4KB.
>
> If your system uses non-4KB pages for disk I/O then you should tune that
> alignment of course. If you are memory-only caching or even not caching
> that object at all - then the memory page size will be the more
> important metric to tune it against.

As shown above, I have 4 KB pages for my memory page size. There is no
disk cache configured, so disk block size should be irrelevant I think
- see the end of this mail for the squid.conf I've been using for this
testing. I also don't have a memory cache configured, so the default
of 256 MB is being used. Seeing as the object I'm testing is a 5 GB
file, I don't think the memory cache should be coming into play. To be
sure, I did also run with `cache_mem none`.

>
> How important I'm not sure. I had thought the relative difference in
> memory and network I/O speeds made the smaller size irrelevant (since we
> are data-copying from the main network SBuf buffers anyway). But
> perhaps not. You may have just found that it needs to be tuned to match
> the network I/O buffer default max-size (64KB).
>
> NP: perhaps the real difference is how fast Squid can walk the list of
> in-memory buffers that span the object in memory cache. Since it walks
> the linked-list from head to position N with each write(2) having larger
> steps would be relevant.

Where in the code is this walking done? Investigating this would be
helpful I think.

> Make sure you have plenty of per-process stack space available before
> going large. Squid allocates several buffers using this size directly on
> the stack. Usually at least 2, maybe a half dozen.

Ensuring I'm being explicit here, in all my testing I haven't messed
with stack sizes, again using the default on my system, which is:

Max stack size            8388608              unlimited            bytes

That seems to have been enough, I think? What would I see if I had
run out of stack space? A crash?

>
>
> It would be page size (memory pages or disk controller I/O pages). Since
> the network is tuned already and defaulting to 64KB.
>
>
>
> It is used primarily for the disk I/O and Squid internal client-streams
> buffers.
>
> In the long-term plan those internal uses will be replaced by SBuf which
> are controlled by the existing squid.conf options and actual message
> sizes more dynamically.
>
> A new option for tuning disk I/O buffer size might be useful in both
> long- and short- terms though.
>
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> 

Re: [squid-dev] [PATCH] Increase request buffer size to 64kb

2016-03-29 Thread Nathan Hoad
On 30 March 2016 at 12:11, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 03/29/2016 06:06 PM, Nathan Hoad wrote:
>
>> This (very small) patch increases the request buffer size to 64kb, from 4kb.
>
>> -#define HTTP_REQBUF_SZ  4096
>> +#define HTTP_REQBUF_SZ  65535
>
>
>> In my testing, this increases throughput rather dramatically for
>> downloading large files:
>
> Based on your results alone, it is clear that the patch does more than
> "increases the request buffer size"(*). IMO, the important initial
> questions we need to answer are:
>
>   1. Why the performance is improved in this micro test?
>   2. What are the expected effects on other traffic?

I think these are reasonable questions to ask. Answering question one
will take time - I'll have to go through the uses of the constant and
modify them individually, to expose the specific code that has
benefited from this change.

I'm not sure how I can answer question two definitively - are there
any accepted benchmarking/test suites I can use to test the change on
different types of traffic, to see if particular situations have been
negatively affected?

>
> Increasing a buffer sizes will naturally improve performance on a
> micro-test that targets that buffer(*), but we should know more before
> deciding whether this increase is _generally_ useful and whether there
> is a better way to achieve similar (or better) performance results.
>
>
> Please note that this message is not meant to disparage your email or
> even the proposed change itself! I am just trying to explain what
> additional information is needed in order to start making an _informed_
> decision about the proposed change. Needless to say, many changes have
> been committed without proper disclosures and analysis, but we should
> strive to do better.
>
>
> Thank you,
>
> Alex.
> P.S. (*) The changed constant _name_ has nothing to do with the micro
> test: download vs. "request buffer size". IIRC, the constant itself is
> misnamed/misused, but that tidbit may be just the tip of the iceberg.

Yes, I think you are right that the constant may be
misnamed/misused. Once I've gone through all the places that use it, I
suppose reevaluating its name would be a good idea.

Thank you,

Nathan.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Add reply_header_add

2016-03-28 Thread Nathan Hoad
Hello,

Attached is a patch with the changes recommended by Amos.

Thank you,

Nathan.

On 25 March 2016 at 04:29, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 17/03/2016 5:25 p.m., Nathan Hoad wrote:
>> On 17 March 2016 at 13:33, Alex Rousskov
>> <rouss...@measurement-factory.com> wrote:
>>> On 03/16/2016 05:40 PM, Nathan Hoad wrote:
>>>
>>>> I've opted to remove Config2.onoff.mangle_request_headers completely.
>>>
>>> Even better! I did not realize it is not a "real" configuration option
>>> but a silly(?) cache for "Config.request_header_access != NULL".
>>>
>>>
>>>> -httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, int req_or_rep)
>>>> +httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, HeaderManglers 
>>>> *hms, int req_or_rep)
>>>
>>> I do not think you need/use the last parameter, but its removal can be
>>> done when committing your patch.
>>
>> Good catch! To lessen work for the committer, I've attached a version
>> of the patch with that change.
>>
>
>
> in src/HttpHeaderTools.cc:
> * you dont have to compare pointers with "!= nullptr" or "== nullptr".
>
> in src/enums.h:
>
> * since you are touching req_or_rep enum anyway please move it to
> HttpHeaderTools.h
>  - then you can remove the #include for enums.h again
>
> in src/cf.data.pre:
>
> * use " [ acl ... ] " instead of "[acl1] ..."
>
>
> Thanks
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
Implement reply_header_add.

This work is submitted on behalf of Bloomberg L.P.

=== modified file 'src/HttpHeaderTools.cc'
--- src/HttpHeaderTools.cc	2016-01-24 17:41:43 +
+++ src/HttpHeaderTools.cc	2016-03-28 21:44:35 +
@@ -40,6 +40,7 @@
 #include 
 
 static void httpHeaderPutStrvf(HttpHeader * hdr, Http::HdrType id, const char *fmt, va_list vargs);
+static void httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer &al, HeaderWithAclList &headersAdd);
 
 void
 httpHeaderMaskInit(HttpHeaderMask * mask, int value)
@@ -267,29 +268,12 @@
  * \retval 1Header has no access controls to test
  */
 static int
-httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, int req_or_rep)
+httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, HeaderManglers *hms)
 {
 int retval;
 
-/* check with anonymizer tables */
-HeaderManglers *hms = NULL;
 assert(e);
 
-if (ROR_REQUEST == req_or_rep) {
-hms = Config.request_header_access;
-} else if (ROR_REPLY == req_or_rep) {
-hms = Config.reply_header_access;
-} else {
-/* error. But let's call it "request". */
-hms = Config.request_header_access;
-}
-
-/* manglers are not configured for this message kind */
-if (!hms) {
-debugs(66, 7, "no manglers configured for message kind " << req_or_rep);
-return 1;
-}
-
 const headerMangler *hm = hms->find(*e);
 
 /* mangler or checklist went away. default allow */
@@ -323,18 +307,40 @@
 
 /** Mangles headers for a list of headers. */
 void
-httpHdrMangleList(HttpHeader * l, HttpRequest * request, int req_or_rep)
+httpHdrMangleList(HttpHeader *l, HttpRequest *request, const AccessLogEntryPointer &al, req_or_rep_t req_or_rep)
 {
 HttpHeaderEntry *e;
 HttpHeaderPos p = HttpHeaderInitPos;
 
-int headers_deleted = 0;
-while ((e = l->getEntry()))
-if (0 == httpHdrMangle(e, request, req_or_rep))
-l->delAt(p, headers_deleted);
-
-if (headers_deleted)
-l->refreshMask();
+/* check with anonymizer tables */
+HeaderManglers *hms = nullptr;
+
+HeaderWithAclList *headersAdd = nullptr;
+
+switch (req_or_rep) {
+case ROR_REQUEST:
+hms = Config.request_header_access;
+headersAdd = Config.request_header_add;
+break;
+case ROR_REPLY:
+hms = Config.reply_header_access;
+headersAdd = Config.reply_header_add;
+break;
+}
+
+if (hms) {
+int headers_deleted = 0;
+while ((e = l->getEntry()))
+if (0 == httpHdrMangle(e, request, hms))
+l->delAt(p, headers_deleted);
+
+if (headers_deleted)
+l->refreshMask();
+}
+
+if (headersAdd && !headersAdd->empty()) {
+httpHdrAdd(l, request, al, *headersAdd);
+}
 }
 
 static

=== modified file 'src/HttpHeaderTools.h'
--- src/HttpHeaderTools.h	2016-01-01 00:12:18 +
+++ src/HttpHeaderTools.h	2016-03-28 21:46:06 +
@@ -29,6 +29,13 @@
 
 typedef std::list<HeaderWithAcl> HeaderWithAclList;
 
+/* Distinguish 

Re: [squid-dev] [PATCH] Add reply_header_add

2016-03-19 Thread Nathan Hoad
Hello,

Attached is an updated patch with Alex's suggested changes, i.e.
moving the use of httpHdrAdd into httpHdrMangleList, and moving some
code out of httpHdrMangle into httpHdrMangleList.

I've opted to remove Config2.onoff.mangle_request_headers completely.
I found keeping it around and making use of it in httpHdrMangleList
made the flow of execution confusing, with regards to ensuring
httpHdrAdd was still called.

Thank you for taking the time to look at this.

Nathan.

On 16 March 2016 at 14:15, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 03/15/2016 07:29 PM, Nathan Hoad wrote:
>> On 15 March 2016 at 12:11, Alex Rousskov wrote:
>>> On 03/14/2016 05:46 PM, Nathan Hoad wrote:
>
>>> * You have not adjusted HTTP response headers produced by
>>> Http::One::Server::writeControlMsgAndCall(). Please either apply the
>>> same logic to those headers or explicitly document that control message
>>> headers are out of this option reach.
>
>> Done.
>
>
> Thank you. After those improvements, you can probably see a new pattern
> emerging:
>
>> if (Config2.onoff.mangle_request_headers)
>> httpHdrMangleList(hdr_out, request, ROR_REQUEST);
>>
>> httpHdrAdd(hdr_out, request, al, Config.request_header_add);
>
> and
>
>
>> httpHdrMangleList(&rep->header, http->request, ROR_REPLY);
>> httpHdrAdd(&rep->header, http->request, http->al, Config.reply_header_add);
>
> and
>
>> httpHdrMangleList(hdr, request, ROR_REPLY);
>>
>> httpHdrAdd(hdr, request, http->al, Config.reply_header_add);
>
> and
>
>> $ bzr grep 'httpHdrMangleList' | wc -l
>> 3
>
> and, I would imagine,
>
>> bzr grep 'httpHdrAdd' | wc -l
>> 3
>
>
> Yes, there are only three cases, and in all of them, we apply the same
> set of two operations. Thus, you can now move the httpHdrAdd() call into
> httpHdrMangleList()!
>
> If you do that, move the code that gets the right Config.*_header_access
> mangler from the beginning of httpHdrMangle() to httpHdrMangleList()
> itself and adjust it to also check Config2.onoff.mangle_request_headers
> during ROR_REQUEST.
>
> Adjust as needed: httpHdrMangle() is a private/static function and you
> will be working with all three httpHdrMangleList() calls in existence
> (AFAICT).
>
>
> I cannot require these additional changes from you, but if you need an
> extra incentive/motivation, then just look at the loop in
> httpHdrMangleList() and what happens at the start of httpHdrMangle()
> that the loop calls on every iteration. Consider the common case of no
> Config.*_header_access manglers configured. See how httpHdrMangleList()
> always iterates through all header fields while repeating the same
> checks and doing nothing useful at all?
>
> With the above changes (that move those checks to be done once, in front
> of the loop, and that will entirely eliminate looping in the common
> case), you would be making Squid a tiny bit faster despite adding a new
> feature!
>
>
> Thank you,
>
> Alex.
>
=== modified file 'src/HttpHeaderTools.cc'
--- src/HttpHeaderTools.cc	2016-01-24 17:41:43 +
+++ src/HttpHeaderTools.cc	2016-03-16 05:23:58 +
@@ -40,6 +40,7 @@
 #include 
 
 static void httpHeaderPutStrvf(HttpHeader * hdr, Http::HdrType id, const char *fmt, va_list vargs);
+static void httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer &al, HeaderWithAclList &headersAdd);
 
 void
 httpHeaderMaskInit(HttpHeaderMask * mask, int value)
@@ -267,29 +268,12 @@
  * \retval 1Header has no access controls to test
  */
 static int
-httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, int req_or_rep)
+httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, HeaderManglers *hms, int req_or_rep)
 {
 int retval;
 
-/* check with anonymizer tables */
-HeaderManglers *hms = NULL;
 assert(e);
 
-if (ROR_REQUEST == req_or_rep) {
-hms = Config.request_header_access;
-} else if (ROR_REPLY == req_or_rep) {
-hms = Config.reply_header_access;
-} else {
-/* error. But let's call it "request". */
-hms = Config.request_header_access;
-}
-
-/* manglers are not configured for this message kind */
-if (!hms) {
-debugs(66, 7, "no manglers configured for message kind " << req_or_rep);
-return 1;
-}
-
 const headerMangler *hm = hms->find(*e);
 
 /* mangler or checklist went away. default allow */
@@ -323,18 +307,40 @@
 
 /** Mangles headers for a list of headers. */
 void
-httpHdrMangleList(HttpHeader * l, HttpRequest * request, int req_or_rep)
+httpHdrMangleList(HttpHeader *l, HttpRequest *request, const AccessLogEntryPointer &al, req_or_rep_t req_or_rep)
 {
 Htt

Re: [squid-dev] [PATCH] Add reply_header_add

2016-03-19 Thread Nathan Hoad
On 17 March 2016 at 13:33, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 03/16/2016 05:40 PM, Nathan Hoad wrote:
>
>> I've opted to remove Config2.onoff.mangle_request_headers completely.
>
> Even better! I did not realize it is not a "real" configuration option
> but a silly(?) cache for "Config.request_header_access != NULL".
>
>
>> -httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, int req_or_rep)
>> +httpHdrMangle(HttpHeaderEntry * e, HttpRequest * request, HeaderManglers 
>> *hms, int req_or_rep)
>
> I do not think you need/use the last parameter, but its removal can be
> done when committing your patch.

Good catch! To lessen work for the committer, I've attached a version
of the patch with that change.

>
> +1

Thank you!

>
>
> BTW, do you know whether ACLs are required for HeaderWithAclList (i.e.,
> our "ACLs may be specified" wording is misleading) or optional (i.e.,
> our "Usage: ... field-value acl1 [acl2]" is wrong)?

The usage statement was wrong, as it turns out. I tested the following
configuration:

request_header_add Test-Header "true"
reply_header_add Test-Header "true"

And the headers are attached to the HTTP messages, so ACLs are indeed
optional. In the attached patch I've also updated the Usage statement
for both request_header_add and reply_header_add to reflect reality.
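
For illustration, here are both forms the corrected usage statement allows (squid.conf fragment; the header names and the ACL are made up for the example):

```
# ACLs are optional:
reply_header_add Test-Header "true"

# ...but can still restrict injection to matching transactions:
acl localnet src 192.168.0.0/16
reply_header_add X-Proxy-Note "internal" localnet
```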

Thank you,

Nathan.


>
>
> Thank you,
>
> Alex.
>
>
>> On 16 March 2016 at 14:15, Alex Rousskov
>> <rouss...@measurement-factory.com> wrote:
>>> On 03/15/2016 07:29 PM, Nathan Hoad wrote:
>>>> On 15 March 2016 at 12:11, Alex Rousskov wrote:
>>>>> On 03/14/2016 05:46 PM, Nathan Hoad wrote:
>>>
>>>>> * You have not adjusted HTTP response headers produced by
>>>>> Http::One::Server::writeControlMsgAndCall(). Please either apply the
>>>>> same logic to those headers or explicitly document that control message
>>>>> headers are out of this option reach.
>>>
>>>> Done.
>>>
>>>
>>> Thank you. After those improvements, you can probably see a new pattern
>>> emerging:
>>>
>>>> if (Config2.onoff.mangle_request_headers)
>>>> httpHdrMangleList(hdr_out, request, ROR_REQUEST);
>>>>
>>>> httpHdrAdd(hdr_out, request, al, Config.request_header_add);
>>>
>>> and
>>>
>>>
>>>> httpHdrMangleList(&rep->header, http->request, ROR_REPLY);
>>>> httpHdrAdd(&rep->header, http->request, http->al, Config.reply_header_add);
>>>
>>> and
>>>
>>>> httpHdrMangleList(hdr, request, ROR_REPLY);
>>>>
>>>> httpHdrAdd(hdr, request, http->al, Config.reply_header_add);
>>>
>>> and
>>>
>>>> $ bzr grep 'httpHdrMangleList' | wc -l
>>>> 3
>>>
>>> and, I would imagine,
>>>
>>>> bzr grep 'httpHdrAdd' | wc -l
>>>> 3
>>>
>>>
>>> Yes, there are only three cases, and in all of them, we apply the same
>>> set of two operations. Thus, you can now move the httpHdrAdd() call into
>>> httpHdrMangleList()!
>>>
>>> If you do that, move the code that gets the right Config.*_header_access
>>> mangler from the beginning of httpHdrMangle() to httpHdrMangleList()
>>> itself and adjust it to also check Config2.onoff.mangle_request_headers
>>> during ROR_REQUEST.
>>>
>>> Adjust as needed: httpHdrMangle() is a private/static function and you
>>> will be working with all three httpHdrMangleList() calls in existence
>>> (AFAICT).
>>>
>>>
>>> I cannot require these additional changes from you, but if you need an
>>> extra incentive/motivation, then just look at the loop in
>>> httpHdrMangleList() and what happens at the start of httpHdrMangle()
>>> that the loop calls on every iteration. Consider the common case of no
>>> Config.*_header_access manglers configured. See how httpHdrMangleList()
>>> always iterates through all header fields while repeating the same
>>> checks and doing nothing useful at all?
>>>
>>> With the above changes (that move those checks to be done once, in front
>>> of the loop, and that will entirely eliminate looping in the common
>>> case), you would be making Squid a tiny bit faster despite adding a new
>>> feature!
>>>
>>>
>>> Thank you,
>>>
>>> Alex.
>>>
>
Implement reply_header_add.

Thi

Re: [squid-dev] [PATCH] Add reply_header_add

2016-03-15 Thread Nathan Hoad
On 15 March 2016 at 12:11, Alex Rousskov
<rouss...@measurement-factory.com> wrote:
> On 03/14/2016 05:46 PM, Nathan Hoad wrote:
>
>> The attached patch implements reply_header_add, for adding HTTP
>> headers to reply objects as they're sent to the client.
>
> Thank you for this useful addition. Unfortunately, it needs quite a bit
> of work.
>
>
> * Please _carefully_ review your src/cf.data.pre changes. There appear
> to be many copy-paste errors there, some of which seriously misleading
> (e.g., "incoming" vs. "outgoing" and "request" vs. "response").

My bad! Good catch.

>
> * You have not adjusted HTTP response headers produced by
> Http::One::Server::writeControlMsgAndCall(). Please either apply the
> same logic to those headers or explicitly document that control message
> headers are out of this option reach.

Done.

>
> * You have not adjusted HTTP response headers produced by
> tunnelConnected() and ClientHttpRequest::sslBumpStart(). Please either
> apply the same logic to those headers or explicitly document that
> successful CONNECT responses are out of this option reach.

I've opted to document that CONNECT responses aren't covered by this
option. Making it work in these places is outside my capabilities.

>
>
> I also recommend the following adjustments:
>
> * The scary "In theory, all of ..." paragraph that made a lot of sense
> for requests is barely applicable to post-cache responses, where most
> information is already available. Please consider making that text
> context-neutral and moving it to an option-independent location in the
> beginning of squid.conf. That would be really useful IMO because we have
> many options using logformat codes now and that old text is applicable,
> to various degrees, to all of them.

Sure, I've added this at the top of the documentation, after SMP
Macros, and removed the paragraphs from both request_header_add and
reply_header_add. I'm not quite sure of the wording, feel free to make
suggestions for how it could be improved.

>
> * Adding a sentence or two explicitly stating that this directive does
> not affect cached responses. Your copied text says that it "has no
> affect on cache hit detection", and that is also true, if somewhat
> redundant for responses.

Done.

>
> * Adding a "See also: request_header_add" statement and the symmetrical
> statement for request_header_add.

Done.

>
>
>
>> +if (Config.reply_header_add && !Config.reply_header_add->empty())
>> +httpHdrAdd(hdr, request, http->al, *Config.reply_header_add);
>
>
> I know you are reusing an existing code template here, but this
> particular copy starts duplicating a lot of logic. I recommend moving
> both guards/conditions from the if-statement into httpHdrAdd() itself.
> If others object to moving both conditions on performance grounds,
> please move at least the second/empty() condition:
>
> That is, ideally:
>
>   httpHdrAdd(hdr, request, http->al, Config.reply_header_add);
>
> Or, if others object:
>
>   if (Config.reply_header_add)
>   httpHdrAdd(hdr, request, http->al, *Config.reply_header_add);

I've gone with your ideal version. If people object, I'm happy to move
to the second version.

>
>
>
> Thank you,
>
> Alex.
>


Thank you,

Nathan.
=== modified file 'src/HttpHeaderTools.cc'
--- src/HttpHeaderTools.cc	2016-01-24 17:41:43 +
+++ src/HttpHeaderTools.cc	2016-03-16 00:31:23 +
@@ -462,11 +462,14 @@
 }
 
 void
-httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer &al, HeaderWithAclList &headersAdd)
+httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer &al, HeaderWithAclList *headersAdd)
 {
+if (headersAdd == NULL || headersAdd->empty())
+return;
+
 ACLFilledChecklist checklist(NULL, request, NULL);
 
-for (HeaderWithAclList::const_iterator hwa = headersAdd.begin(); hwa != headersAdd.end(); ++hwa) {
+for (HeaderWithAclList::const_iterator hwa = headersAdd->begin(); hwa != headersAdd->end(); ++hwa) {
 if (!hwa->aclList || checklist.fastCheck(hwa->aclList) == ACCESS_ALLOWED) {
 const char *fieldValue = NULL;
 MemBuf mb;

=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2016-03-12 20:27:35 +
+++ src/SquidConfig.h	2016-03-14 23:38:41 +
@@ -463,6 +463,8 @@
 HeaderManglers *reply_header_access;
 ///request_header_add access list
 HeaderWithAclList *request_header_add;
+///reply_header_add access list
+HeaderWithAclList *reply_header_add;
 ///note
 Notes notes;
 char *coredump_dir;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-03-12 20:27:35 +
+++ src/cf.data.pre	2016-03-16 00:

[squid-dev] [PATCH] Add reply_header_add

2016-03-14 Thread Nathan Hoad
Hello,

The attached patch implements reply_header_add, for adding HTTP
headers to reply objects as they're sent to the client.

This work is submitted on behalf of Bloomberg L.P.

Thank you,

Nathan.
Implement reply_header_add, the HTTP reply equivalent of request_header_add.

This work is submitted on behalf of Bloomberg L.P.
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2016-03-12 20:27:35 +
+++ src/SquidConfig.h	2016-03-14 23:38:41 +
@@ -463,6 +463,8 @@
 HeaderManglers *reply_header_access;
 ///request_header_add access list
 HeaderWithAclList *request_header_add;
+///reply_header_add access list
+HeaderWithAclList *reply_header_add;
 ///note
 Notes notes;
 char *coredump_dir;

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-03-12 20:27:35 +
+++ src/cf.data.pre	2016-03-14 23:38:41 +
@@ -6102,6 +6102,46 @@
 	only.
 DOC_END
 
+NAME: reply_header_add
+TYPE: HeaderWithAclList
+LOC: Config.reply_header_add
+DEFAULT: none
+DOC_START
+	Usage:   reply_header_add field-name field-value acl1 [acl2] ...
+	Example: reply_header_add X-Client-CA "CA=%ssl::>cert_issuer" all
+
+	This option adds header fields to incoming HTTP responses (i.e., response
+	headers delivered by Squid to the client). The option has no effect during
+	cache hit detection. The equivalent adaptation vectoring point in ICAP
+	terminology is post-cache RESPMOD.
+
+	Field-name is a token specifying an HTTP header name. If a
+	standard HTTP header name is used, Squid does not check whether
+	the new header conflicts with any existing headers or violates
+	HTTP rules. If the request to be modified already contains a
+	field with the same name, the old field is preserved but the
+	header field values are not merged.
+
+	Field-value is either a token or a quoted string. If quoted
+	string format is used, then the surrounding quotes are removed
+	while escape sequences and %macros are processed.
+
+	In theory, all of the logformat codes can be used as %macros.
+	However, unlike logging (which happens at the very end of
+	transaction lifetime), the transaction may not yet have enough
+	information to expand a macro when the new header value is needed.
+	And some information may already be available to Squid but not yet
+	committed where the macro expansion code can access it (report
+	such instances!). The macro will be expanded into a single dash
+	('-') in such cases. Not all macros have been tested.
+
+	One or more Squid ACLs may be specified to restrict header
+	injection to matching requests. As always in squid.conf, all
+	ACLs in an option ACL list must be satisfied for the insertion
+	to happen. The request_header_add option supports fast ACLs
+	only.
+DOC_END
+
 NAME: note
 TYPE: note
 LOC: Config.notes

=== modified file 'src/client_side_reply.cc'
--- src/client_side_reply.cc	2016-03-11 18:00:51 +
+++ src/client_side_reply.cc	2016-03-14 23:38:41 +
@@ -1303,6 +1303,9 @@
 return result;
 }
 
+//Declared in HttpHeaderTools.cc
+void httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer &al, HeaderWithAclList &headers_add);
+
 /**
  * Generate the reply headers sent to client.
  *
@@ -1579,6 +1582,9 @@
 }
 
 httpHdrMangleList(hdr, request, ROR_REPLY);
+
+if (Config.reply_header_add && !Config.reply_header_add->empty())
+httpHdrAdd(hdr, request, http->al, *Config.reply_header_add);
 }
 
 void

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PREVIEW] Free AccessLogEntry::url when needed

2016-03-14 Thread Nathan Hoad
Hello,

Attached is a first attempt at converting the AccessLogEntry::url
member to an SBuf. It's definitely resulted in more data copies - just
about all sources are still char *.

I'm not sure how I can actually avoid those, aside from converting
them to SBuf as well, which would mean modifying HTCP and ICP logging,
and ClientSideRequest::log_uri. I can take a crack at
ClientSideRequest::log_uri if people are interested, but I'm not in a
position to test any changes made to HTCP and ICP logging.

Thank you,

Nathan.

On 11 March 2016 at 16:21, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 11/03/2016 12:53 p.m., Nathan Hoad wrote:
>> Hello,
>>
>> The attached patch is a demonstration of a memory leak on
>> AccessLogentry::url, created by ACLFilledChecklist::syncAle(). The
>> patch itself is not good enough quality to warrant acceptance, and no
>> doubt introduces bugs.
>>
>> ::syncAle() is the only place in the codebase that assigns a URL that
>> AccessLogEntry is expected to free(), which AccessLogEntry doesn't do.
>> This results in a memory leak.
>>
>> As mentioned, the patch is of questionable quality, as the chunk in
>> src/client_side.cc shows. This sort of change would have to be made
>> everywhere that url is assigned to, which isn't practical.
>>
>> It would be good to modify ::syncAle() so that it doesn't allocate a
>> new URL, but I'm not sure how that could be achieved. I think the only
>> workable alternative is to change AccessLogEntry::url to a more
>> intelligent string type. There is also the possibility of making the
>> member private and using a setter to ensure it is always xstrdup'd
>> from another URL.
>>
>> If someone with better knowledge than me would like to make a
>> recommendation on what to do here, I'd be happy to work towards a
>> better solution.
>
> You can probably guess by now that I'm going to say SBuf.
>
> In this case there is also possibly the option of removing that url
> variable entirely. A class URL is the proper way to store URLs. But that
> could be significantly more work to make URL ref-countable with a Pointer.
>
> FYI:
>  SBuf is the intended solution to all our many string related problems.
> We are just in the transitional period of converting things.
>
> One issue we are working around during the transition time is that SBuf
> data-copies when importing from any non-SBuf type, and when c_str() is
> called to export. Effort to avoid those is worthwhile. Use in the wrong
> places can reduce performance noticably.
>  However, any object which is already being allocated or copied to the
> string/char* can be SBuf converted without this being a new cost. And we
> then gain improved memory management from it.
>
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
Convert AccessLogEntry::url to an SBuf. This eliminates at least one memory leak.

This work is submitted on behalf of Bloomberg L.P.

=== modified file 'src/AccessLogEntry.h'
--- src/AccessLogEntry.h	2016-03-11 15:03:20 +
+++ src/AccessLogEntry.h	2016-03-14 04:13:24 +
@@ -21,6 +21,7 @@
 #include "LogTags.h"
 #include "MessageSizes.h"
 #include "Notes.h"
+#include "SBuf.h"
 #if ICAP_CLIENT
 #include "adaptation/icap/Elements.h"
 #endif
@@ -56,7 +57,7 @@
 /// Fetch the transaction method string (ICP opcode, HTCP opcode or HTTP method)
 SBuf getLogMethod() const;
 
-const char *url;
+SBuf url;
 
 /// TCP/IP level details about the client connection
 Comm::ConnectionPointer tcpClient;

=== modified file 'src/acl/FilledChecklist.cc'
--- src/acl/FilledChecklist.cc	2016-01-31 12:05:30 +
+++ src/acl/FilledChecklist.cc	2016-03-14 04:16:21 +
@@ -103,9 +103,9 @@
 HTTPMSGLOCK(al->adapted_request);
 }
 
-if (!al->url) {
+if (al->url.isEmpty()) {
 showDebugWarning("URL");
-al->url = xstrdup(request->url.absolute().c_str());
+al->url = request->url.absolute();
 }
 }
 

=== modified file 'src/format/Format.cc'
--- src/format/Format.cc	2016-03-11 15:03:20 +
+++ src/format/Format.cc	2016-03-14 04:16:21 +
@@ -1039,7 +1039,9 @@
 break;
 
 case LFT_REQUEST_URI:
-out = al->url;
+if (!al->url.isEmpty()) {
+out = al->url.c_str();
+}
 break;
 
 case LFT_REQUEST_VERSION_OLD_2X:

=== modified file 'src/log/FormatHttpdCombined.cc'
--- src/log/FormatHttpdCombined.cc	2016-01-01 00:12:18 +
+++ src/log/FormatHttpdCombined.cc	2016-03-14 04:16:21 +
@@ -53,7 +53,7 @@

Re: [squid-dev] [PATCH] Ensure any previous value in lastAclData is free'd

2016-03-10 Thread Nathan Hoad
Sure, that seems straightforward enough. Attached is a patch that
migrates lastAclData to an SBuf.

Thank you,

Nathan.

On 11 March 2016 at 14:38, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 11/03/2016 11:56 a.m., Nathan Hoad wrote:
>> Hello,
>>
>> Attached is a patch that ensures lastAclData doesn't leak any data,
>> exactly the same as lastAclName.
>>
>> In my testing, this wasn't leaking very much memory, but a leak regardless.
>>
>
> +1.
>
> Since the 'sb' variable it's being set from is an SBuf, and the format
> codes can now handle SBuf too - would you mind taking a stab at making
> lastAclData itself an SBuf?
>  That should get rid of some data-copy cycles and handle any related
> leaks hiding elsewhere.
>
> (just a request, if not that is fine and this can go in instead).
>
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
Migrate lastAclData to an SBuf.

This is submitted on behalf of Bloomberg L.P.

=== modified file 'src/AccessLogEntry.cc'
--- src/AccessLogEntry.cc	2016-01-01 00:12:18 +
+++ src/AccessLogEntry.cc	2016-03-11 04:07:32 +
@@ -76,7 +76,6 @@
 HTTPMSGUNLOCK(adapted_request);
 
 safe_free(lastAclName);
-safe_free(lastAclData);
 
 HTTPMSGUNLOCK(reply);
 HTTPMSGUNLOCK(request);

=== modified file 'src/AccessLogEntry.h'
--- src/AccessLogEntry.h	2016-01-01 00:12:18 +
+++ src/AccessLogEntry.h	2016-03-11 04:25:25 +
@@ -42,7 +42,6 @@
 AccessLogEntry() :
 url(nullptr),
 lastAclName(nullptr),
-lastAclData(nullptr),
 reply(nullptr),
 request(nullptr),
 adapted_request(nullptr)
@@ -207,7 +206,7 @@
 #endif
 
 const char *lastAclName; ///< string for external_acl_type %ACL format code
-const char *lastAclData; ///< string for external_acl_type %DATA format code
+SBuf lastAclData; ///< string for external_acl_type %DATA format code
 
 HierarchyLogEntry hier;
 HttpReply *reply;

=== modified file 'src/external_acl.cc'
--- src/external_acl.cc	2016-02-19 17:19:25 +
+++ src/external_acl.cc	2016-03-11 04:07:33 +
@@ -742,7 +742,7 @@
 }
 }
 
-ch->al->lastAclData = xstrdup(sb.c_str());
+ch->al->lastAclData = sb;
 }
 
 #if USE_IDENT

=== modified file 'src/format/Format.cc'
--- src/format/Format.cc	2016-02-13 07:51:20 +
+++ src/format/Format.cc	2016-03-11 04:07:33 +
@@ -1381,7 +1381,7 @@
 break;
 
 case LFT_EXT_ACL_DATA:
-out = al->lastAclData;
+out = al->lastAclData.c_str();
 break;
 }
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PREVIEW] Free AccessLogEntry::url when needed

2016-03-10 Thread Nathan Hoad
Hello,

The attached patch demonstrates a memory leak of
AccessLogEntry::url, created by ACLFilledChecklist::syncAle(). The
patch itself is not of good enough quality to warrant acceptance, and no
doubt introduces bugs.

::syncAle() is the only place in the codebase that assigns a URL that
AccessLogEntry is expected to free(), which AccessLogEntry doesn't do.
This results in a memory leak.

As mentioned, the patch is of questionable quality, as the chunk in
src/client_side.cc shows. This sort of change would have to be made
everywhere that url is assigned to, which isn't practical.

It would be good to modify ::syncAle() so that it doesn't allocate a
new URL, but I'm not sure how that could be achieved. I think the only
workable alternative is to change AccessLogEntry::url to a more
intelligent string type. There is also the possibility of making the
member private and using a setter to ensure it is always xstrdup'd
from another URL.

If someone with better knowledge than me would like to make a
recommendation on what to do here, I'd be happy to work towards a
better solution.

Thank you,

Nathan.
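A minimal illustration of the ownership ambiguity described above (RawEntry and ValueEntry are hypothetical stand-ins, not Squid's actual classes): a raw pointer member with mixed borrowed/owned assignments forces an extra ownership flag, while a value-type string removes the problem entirely.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// A raw const char* member is sometimes borrowed (points at memory owned
// elsewhere) and sometimes heap-allocated, so the destructor needs an
// extra flag to know whether to free it.
struct RawEntry {
    const char *url = nullptr;
    bool free_url = false;   // the flag the demonstration patch adds
    ~RawEntry() { if (free_url) free(const_cast<char *>(url)); }
};

// The "more intelligent string type" alternative: a value-type string
// (SBuf in Squid, modelled here with std::string) owns its bytes
// unconditionally, so every assignment is safe and nothing can leak.
struct ValueEntry {
    std::string url;
};
```

The flag approach works but must be kept in sync at every assignment site, which is exactly why the patch below has to touch each place `url` is assigned.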
Demonstration of a memory leak in Squid via an incomplete patch.

This is submitted on behalf of Bloomberg L.P.
=== modified file 'src/AccessLogEntry.cc'
--- src/AccessLogEntry.cc	2016-01-01 00:12:18 +
+++ src/AccessLogEntry.cc	2016-03-10 23:06:37 +
@@ -72,6 +72,10 @@
 
 safe_free(headers.reply);
 
+if (free_url) {
+safe_free(url);
+}
+
 safe_free(headers.adapted_request);
 HTTPMSGUNLOCK(adapted_request);
 

=== modified file 'src/AccessLogEntry.h'
--- src/AccessLogEntry.h	2016-01-01 00:12:18 +
+++ src/AccessLogEntry.h	2016-03-10 23:06:59 +
@@ -41,6 +41,7 @@
 
 AccessLogEntry() :
 url(nullptr),
+free_url(false),
 lastAclName(nullptr),
 lastAclData(nullptr),
 reply(nullptr),
@@ -58,6 +59,7 @@
 SBuf getLogMethod() const;
 
 const char *url;
+bool free_url;
 
 /// TCP/IP level details about the client connection
 Comm::ConnectionPointer tcpClient;

=== modified file 'src/acl/FilledChecklist.cc'
--- src/acl/FilledChecklist.cc	2016-01-31 12:05:30 +
+++ src/acl/FilledChecklist.cc	2016-03-10 23:06:08 +
@@ -106,6 +106,7 @@
 if (!al->url) {
 showDebugWarning("URL");
 al->url = xstrdup(request->url.absolute().c_str());
+al->free_url = true;
 }
 }
 

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2016-02-13 07:51:20 +
+++ src/client_side.cc	2016-03-10 23:06:36 +
@@ -383,6 +383,10 @@
 debugs(33, 5, "logging half-baked transaction: " << log_uri);
 
 al->icp.opcode = ICP_INVALID;
+if (al->url && al->free_url) {
+safe_free(al->url);
+al->free_url = false;
+}
 al->url = log_uri;
 debugs(33, 9, "clientLogRequest: al.url='" << al->url << "'");
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] Ensure any previous value in lastAclData is free'd

2016-03-10 Thread Nathan Hoad
Hello,

Attached is a patch that ensures lastAclData doesn't leak any data,
exactly the same as lastAclName.

In my testing, this wasn't leaking very much memory, but it was a leak regardless.

Thank you,

Nathan.
Free any previous data stored in lastAclData.

This is submitted on behalf of Bloomberg L.P.

=== modified file 'src/external_acl.cc'
--- src/external_acl.cc	2016-02-19 17:19:25 +
+++ src/external_acl.cc	2016-03-10 22:42:38 +
@@ -742,6 +742,7 @@
 }
 }
 
+safe_free(ch->al->lastAclData);
 ch->al->lastAclData = xstrdup(sb.c_str());
 }
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] Fix HttpRequest object leaks in Squid 4.0.x

2016-03-09 Thread Nathan Hoad
Hello,

The attached patch fixes a rather profuse memory leak in the Squid
4.0.x series where under certain conditions, HttpRequest objects would
get lost. I can provide more information here if requested,
specifically a Valgrind trace and configuration to reproduce this.

I suggest that a better long term solution would be to make the
members private, and have setter methods to ensure the correct
unlocking occurs. Alternatively, using a smart pointer, e.g.
Http::HttpRequestPointer. If people have any recommendations here, let
me know and I can work towards this.

Thank you,

Nathan.

P.S. I have two more patches that fix less severe memory leaks that
are quite distinct from one another. Should I submit them as a single
patch, or multiple patches? At least one of the patches will involve
some discussion on how to improve it. They are not large.
Ensure any previous HttpRequest objects on AccessLogEntry are unlocked, so they don't leak memory.

With this change, all current assignments to al->request and
al->adapted_request are guarded by either a nullptr check, or by
HTTPMSGUNLOCK()'ing the previous request object first.

This is submitted on behalf of Bloomberg L.P.
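The fix follows a reference-counting discipline that can be sketched in isolation (Msg, msgLock, msgUnlock, and assignLocked are illustrative analogues of Squid's HTTPMSGLOCK/HTTPMSGUNLOCK macros, not the real code): overwriting a locked pointer without unlocking it first leaks a reference.

```cpp
#include <cassert>

// A message whose lifetime is governed by a lock (reference) count.
struct Msg {
    int lock_count = 0;
};

// Analogues of HTTPMSGLOCK / HTTPMSGUNLOCK (names borrowed for illustration).
inline void msgLock(Msg *m) { if (m) ++m->lock_count; }
inline void msgUnlock(Msg *&m) { if (m) --m->lock_count; m = nullptr; }

// The fixed assignment pattern: release whatever reference the slot already
// holds, then take a reference on the new message.
inline void assignLocked(Msg *&slot, Msg *fresh) {
    msgUnlock(slot);   // the line the patch adds before each reassignment
    slot = fresh;
    msgLock(slot);
}
```

Without the leading unlock, reassigning the slot leaves the old message's count permanently nonzero, which is the leak the patch addresses.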

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2016-02-13 07:51:20 +
+++ src/client_side.cc	2016-03-10 03:59:42 +
@@ -446,6 +446,7 @@
 }
 
 if (request) {
+HTTPMSGUNLOCK(al->adapted_request);
 al->adapted_request = request;
 HTTPMSGLOCK(al->adapted_request);
 }
@@ -2820,6 +2821,7 @@
 acl_checklist->al->tcpClient = clientConnection;
 acl_checklist->al->cache.port = port;
 acl_checklist->al->cache.caddr = log_addr;
+HTTPMSGUNLOCK(acl_checklist->al->request);
 acl_checklist->al->request = request;
 HTTPMSGLOCK(acl_checklist->al->request);
 acl_checklist->nonBlockingCheck(httpsSslBumpAccessCheckDone, this);

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2016-02-21 Thread Nathan Hoad
Hello,

I've just started some clean up of local patches in preparation of
upgrading to Squid 4, and I've noticed this hasn't been applied. What
do I need to do to get this applied?

Thank you,

Nathan.

On 19 June 2015 at 18:26, Tsantilas Christos
<chtsa...@users.sourceforge.net> wrote:
> The patch should be applied to trunk.
>
>
> On 06/19/2015 04:26 AM, Amos Jeffries wrote:
>>
>> On 7/06/2015 2:41 a.m., Nathan Hoad wrote:
>>>
>>> Hello,
>>>
>>> Attached is a patch making the changes recommended by Christos. I've
>>> done as described, creating a Ssl::configureUnconfiguredSslContext
>>> function, rather than making the changes to Ssl::configureSSL.
>>
>>
>> Christos, can you please review and apply if it is acceptible to you?
>>
>> Cheers
>> Amos
>>
>> ___
>> squid-dev mailing list
>> squid-dev@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-dev
>>
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2015-06-06 Thread Nathan Hoad
Hello,

Attached is a patch making the changes recommended by Christos. I've
done as described, creating a Ssl::configureUnconfiguredSslContext
function, rather than making the changes to Ssl::configureSSL.

Thank you,

Nathan.

On 23 May 2015 at 19:14, Tsantilas Christos
<chtsa...@users.sourceforge.net> wrote:
 Hi Nathan,

  The patch works.

 However, I believe it is not a good idea to configure SSL_CTX objects while we
 are setting parameters on an SSL object.
 An SSL_CTX object is common to many SSL objects.

 Instead of setting the SSL_CTX object from
 configureSSLUsingPkeyAndCertFromMemory, I suggest a new method
 configureUnconfigureCTX() which does the job:

 Then inside client_side use:

  bool ret = Ssl::configureSSLUsingPkeyAndCertFromMemory(...);
   if (!ret)
 debugs(33, 5, "mpla mpla");
  SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
  ret = configureUnconfigureCTX(sslContext, ..., signAlgorithm)


 OR

   Ssl::configureSSL(ssl, certProperties, *port))
   SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
   ret = configureUnconfigureCTX(sslContext,..., signAlgorithm)


 Probably the above should be wrapped in a new method,
 or maybe a new function whose name says that both CTX and SSL objects
 are modified.



 On 04/30/2015 08:11 AM, Nathan Hoad wrote:

 Hello,

 I am running Squid with SSL bump in bump and splice mode, and I've
 observed that this mode does not append the signing certificate or any
 chained certificates to the certificate chain presented to the client.

 With old bump mode, Squid adds the signing certificate and any other
 chained certificates to the SSL context. With bump and splice mode,
 these certificates are not added. Attached is a patch that adds these
 certificates for bump and spliced connections.

 Thank you,

 Nathan.



 ___
 squid-dev mailing list
 squid-dev@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-dev

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-06-05 23:41:22 +
+++ src/client_side.cc	2015-06-06 13:56:01 +
@@ -3971,6 +3971,9 @@
 bool ret = Ssl::configureSSLUsingPkeyAndCertFromMemory(ssl, reply_message.getBody().c_str(), *port);
 if (!ret)
 debugs(33, 5, "Failed to set certificates to ssl object for PeekAndSplice mode");
+
+SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
+Ssl::configureUnconfiguredSslContext(sslContext, signAlgorithm, *port);
 } else {
 SSL_CTX *ctx = Ssl::generateSslContextUsingPkeyAndCertFromMemory(reply_message.getBody().c_str(), *port);
 getSslContextDone(ctx, true);
@@ -4129,6 +4132,9 @@
 SSL *ssl = fd_table[clientConnection->fd].ssl;
 if (!Ssl::configureSSL(ssl, certProperties, *port))
 debugs(33, 5, "Failed to set certificates to ssl object for PeekAndSplice mode");
+
+SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
+Ssl::configureUnconfiguredSslContext(sslContext, certProperties.signAlgorithm, *port);
 } else {
 SSL_CTX *dynCtx = Ssl::generateSslContext(certProperties, *port);
 getSslContextDone(dynCtx, true);
@@ -4144,17 +4150,10 @@
 // Try to add generated ssl context to storage.
 if (port->generateHostCertificates && isNew) {
 
-if (signAlgorithm == Ssl::algSignTrusted) {
-// Add signing certificate to the certificates chain
-X509 *cert = port->signingCert.get();
-if (SSL_CTX_add_extra_chain_cert(sslContext, cert)) {
-// increase the certificate lock
-CRYPTO_add(&(cert->references),1,CRYPTO_LOCK_X509);
-} else {
-const int ssl_error = ERR_get_error();
-debugs(33, DBG_IMPORTANT, "WARNING: can not add signing certificate to SSL context chain: " << ERR_error_string(ssl_error, NULL));
-}
-Ssl::addChainToSslContext(sslContext, port->certsToChain.get());
+if (sslContext && (signAlgorithm == Ssl::algSignTrusted)) {
+Ssl::chainCertificatesToSSLContext(sslContext, *port);
+} else if (signAlgorithm == Ssl::algSignTrusted) {
+debugs(33, DBG_IMPORTANT, "WARNING: can not add signing certificate to SSL context chain because SSL context chain is invalid!");
 }
 //else it is self-signed or untrusted do not attrach any certificate
 

=== modified file 'src/ssl/support.cc'
--- src/ssl/support.cc	2015-06-05 23:41:22 +
+++ src/ssl/support.cc	2015-06-06 14:15:43 +
@@ -1616,6 +1616,30 @@
 return createSSLContext(cert, pkey, port);
 }
 
+void
+Ssl::chainCertificatesToSSLContext(SSL_CTX *sslContext, AnyP::PortCfg &port)
+{
+assert(sslContext != NULL);
+// Add

[squid-dev] [PATCH] Consistently apply notes from cached external ACL replies

2015-05-08 Thread Nathan Hoad
Hello,

Attached is a patch that adds notes from cached ACL helpers to request objects.

Without this patch, when you have an external ACL that replies with
notes, they are only added onto the HTTP request that kicked off the
external ACL lookup, and not cached ACL responses. This means if you
set notes from an external ACL that are used for some processing in
other ACLs, or post-processing on logs, things may be missed.

To illustrate what I mean...

external_acl_type foo %DST /path/to/fakehelper.py
acl foo external foo
http_access allow foo

logformat foolog %ru %note %et
access_log /path/to/access.log foolog

fakehelper is a script that always replies with "OK tag=fromacl
foo=bar bar=baz".

Access log prior to the patch:

http://google.com/ foo:%20bar%0D%0Atag:%20fromacl%0D%0Abar:%20baz%0D%0A fromacl
http://google.com/ - fromacl

Access log after the patch:

http://google.com/ foo:%20bar%0D%0Atag:%20fromacl%0D%0Abar:%20baz%0D%0A fromacl
http://google.com/ foo:%20bar%0D%0Atag:%20fromacl%0D%0Abar:%20baz%0D%0A fromacl

Thank you,

Nathan.
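The essence of the fix can be sketched independently of Squid's types (Cache, Notes, and lookup below are hypothetical illustrations, not the real external ACL code): notes must be handed back on the cache-hit path too, so the caller can attach them to the request in one place, mirroring how the patch moves UpdateRequestNotes() into the shared lookup path.

```cpp
#include <cassert>
#include <map>
#include <string>

// Helper kv-pairs returned by an external ACL helper.
using Notes = std::map<std::string, std::string>;

// A toy cache of helper replies, keyed by the ACL lookup key.
struct Cache {
    std::map<std::string, Notes> entries;
};

// Lookup that returns the notes whether the reply is fresh or cached, so
// attaching them to the transaction happens on both paths.
Notes lookup(Cache &c, const std::string &key, const Notes &fresh) {
    auto it = c.entries.find(key);
    if (it != c.entries.end())
        return it->second;   // cache hit: notes are still surfaced
    c.entries[key] = fresh;  // miss: store the helper reply
    return fresh;
}
```

Before the patch, only the miss path attached notes; the hit path returned the cached entry without them, producing the empty `%note` column in the second log line above.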
=== modified file 'src/external_acl.cc'
--- src/external_acl.cc	2015-01-13 09:13:49 +
+++ src/external_acl.cc	2015-05-08 10:17:07 +
@@ -706,6 +706,12 @@
 
 if (entry->message.size())
 req->extacl_message = entry->message;
+
+// attach the helper kv-pair to the transaction
+// XXX: we have no access to the transaction / AccessLogEntry so cant SyncNotes().
+// workaround by using anything already set in HttpRequest
+// OR use new and rely on a later Sync copying these to AccessLogEntry
+UpdateRequestNotes(req->clientConnectionManager.get(), *req, entry->notes);
 }
 }
 
@@ -1533,17 +1539,6 @@
 ACLFilledChecklist *checklist = Filled(static_cast<ACLChecklist*>(data));
 checklist->extacl_entry = result;
 
-// attach the helper kv-pair to the transaction
-if (checklist->extacl_entry != NULL) {
-if (HttpRequest * req = checklist->request) {
-// XXX: we have no access to the transaction / AccessLogEntry so cant SyncNotes().
-// workaround by using anything already set in HttpRequest
-// OR use new and rely on a later Sync copying these to AccessLogEntry
-
-UpdateRequestNotes(checklist->conn(), *req, checklist->extacl_entry->notes);
-}
-}
-
 checklist->resumeNonBlockingCheck(ExternalACLLookup::Instance());
 }
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2015-05-05 Thread Nathan Hoad
On 1 May 2015 at 18:36, Amos Jeffries <squ...@treenet.co.nz> wrote:

 Which mode? bump or splice or peek-then-bump or peek-then-splice mode?

Peek then bump! Apologies, I was incredibly unclear on that.


 The valid operations are different in each combination of operations.

 bump mode is equivalent to the old client-first, where Squid is
 generating the whole certificate chain sent to the client.

 splice or peek-then-splice modes are both equivalent to not bumping at
 all. Squid is expected to pass on exactly what the server emits, no
 matter how screwed.

 peek-then-bump is equivalent to server-first mode. Where the
 certificates generated by Squid are expected to mimic what the server
 sent. The full chain is only expected IF the server sent its whole chain.

The scenario this patch addresses is when Squid is configured with an
intermediate signing CA certificate, and clients have the root CA
installed on their machines. What happens is that the generated
certificates come down with an unknown issuer (the intermediate
signing certificate), with no intermediates, so they are rejected. By
adding the configured certificate chain as old client-first mode did,
the intermediate and root certificates come down as well, resulting in
the issuer being identified and the connection being established
securely.


 Amos

 ___
 squid-dev mailing list
 squid-dev@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-dev
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] Add chained certificates and signing certificate to bumpAndSpliced connections

2015-04-30 Thread Nathan Hoad
Hello,

I am running Squid with SSL bump in bump and splice mode, and I've
observed that this mode does not append the signing certificate or any
chained certificates to the certificate chain presented to the client.

With old bump mode, Squid adds the signing certificate and any other
chained certificates to the SSL context. With bump and splice mode,
these certificates are not added. Attached is a patch that adds these
certificates for bump and spliced connections.

Thank you,

Nathan.
=== modified file 'src/client_side.cc'
--- src/client_side.cc	2015-03-21 07:53:13 +
+++ src/client_side.cc	2015-04-30 04:03:52 +
@@ -3968,7 +3968,7 @@
 if (sslServerBump  (sslServerBump-act.step1 == Ssl::bumpPeek || sslServerBump-act.step1 == Ssl::bumpStare)) {
 doPeekAndSpliceStep();
 SSL *ssl = fd_table[clientConnection-fd].ssl;
-bool ret = Ssl::configureSSLUsingPkeyAndCertFromMemory(ssl, reply_message.getBody().c_str(), *port);
+bool ret = Ssl::configureSSLUsingPkeyAndCertFromMemory(ssl, reply_message.getBody().c_str(), *port, signAlgorithm);
 if (!ret)
 debugs(33, 5, "Failed to set certificates to ssl object for PeekAndSplice mode");
 } else {
@@ -4144,17 +4144,10 @@
 // Try to add generated ssl context to storage.
 if (port->generateHostCertificates && isNew) {
 
-if (signAlgorithm == Ssl::algSignTrusted) {
-// Add signing certificate to the certificates chain
-X509 *cert = port->signingCert.get();
-if (SSL_CTX_add_extra_chain_cert(sslContext, cert)) {
-// increase the certificate lock
-CRYPTO_add(&(cert->references),1,CRYPTO_LOCK_X509);
-} else {
-const int ssl_error = ERR_get_error();
-debugs(33, DBG_IMPORTANT, "WARNING: can not add signing certificate to SSL context chain: " << ERR_error_string(ssl_error, NULL));
-}
-Ssl::addChainToSslContext(sslContext, port->certsToChain.get());
+if (sslContext && (signAlgorithm == Ssl::algSignTrusted)) {
+Ssl::chainCertificatesToSSLContext(sslContext, *port);
+} else if (signAlgorithm == Ssl::algSignTrusted) {
+debugs(33, DBG_IMPORTANT, "WARNING: can not add signing certificate to SSL context chain because SSL context chain is invalid!");
 }
 //else it is self-signed or untrusted do not attrach any certificate
 

=== modified file 'src/ssl/support.cc'
--- src/ssl/support.cc	2015-01-24 04:53:39 +
+++ src/ssl/support.cc	2015-04-30 04:00:12 +
@@ -1587,6 +1587,33 @@
 return createSSLContext(cert, pkey, port);
 }
 
+void
+Ssl::chainCertificatesToSSL(SSL *ssl, AnyP::PortCfg &port)
+{
+SSL_CTX *sslContext = SSL_get_SSL_CTX(ssl);
+if (sslContext) {
+Ssl::chainCertificatesToSSLContext(sslContext, port);
+} else {
+debugs(33, DBG_IMPORTANT, "Could not retrieve SSL_CTX from SSL object");
+}
+}
+
+void
+Ssl::chainCertificatesToSSLContext(SSL_CTX *sslContext, AnyP::PortCfg &port)
+{
+assert(sslContext != NULL);
+// Add signing certificate to the certificates chain
+X509 *signingCert = port.signingCert.get();
+if (SSL_CTX_add_extra_chain_cert(sslContext, signingCert)) {
+// increase the certificate lock
+CRYPTO_add(&(signingCert->references),1,CRYPTO_LOCK_X509);
+} else {
+const int ssl_error = ERR_get_error();
+debugs(33, DBG_IMPORTANT, "WARNING: can not add signing certificate to SSL context chain: " << ERR_error_string(ssl_error, NULL));
+}
+Ssl::addChainToSslContext(sslContext, port.certsToChain.get());
+}
+
 bool
 Ssl::configureSSL(SSL *ssl, CertificateProperties const &properties, AnyP::PortCfg &port)
 {
@@ -1607,11 +1634,15 @@
 if (!SSL_use_PrivateKey(ssl, pkey.get()))
 return false;
 
+if (properties.signAlgorithm == Ssl::algSignTrusted) {
+Ssl::chainCertificatesToSSL(ssl, port);
+}
+
 return true;
 }
 
 bool
-Ssl::configureSSLUsingPkeyAndCertFromMemory(SSL *ssl, const char *data, AnyP::PortCfg &port)
+Ssl::configureSSLUsingPkeyAndCertFromMemory(SSL *ssl, const char *data, AnyP::PortCfg &port, CertSignAlgorithm signAlgorithm)
 {
 Ssl::X509_Pointer cert;
 Ssl::EVP_PKEY_Pointer pkey;
@@ -1627,6 +1658,10 @@
 if (!SSL_use_PrivateKey(ssl, pkey.get()))
 return false;
 
+if (signAlgorithm == Ssl::algSignTrusted) {
+Ssl::chainCertificatesToSSL(ssl, port);
+}
+
 return true;
 }
 

=== modified file 'src/ssl/support.h'
--- src/ssl/support.h	2015-01-13 09:13:49 +
+++ src/ssl/support.h	2015-04-30 04:00:12 +
@@ -222,6 +222,18 @@
 SSL_CTX * createSSLContext(Ssl::X509_Pointer &x509, Ssl::EVP_PKEY_Pointer &pkey, AnyP::PortCfg &port);
 
 /**
+ \ingroup ServerProtocolSSLAPI
+ * Chain signing certificate and chained certificates to an SSL Context
+ */
+void 

Re: issue with ICAP message for redirecting HTTPS/CONNECT

2014-06-08 Thread Nathan Hoad
Hi Marcus,

There's a bug in your ICAP server with how it's handling the
Encapsulated header that it sends back to Squid. This is what your
server sent back to Squid for a REQMOD request:

Encapsulated: res-hdr=0, null-body=1930d
X-Next-Services: 0d
0d
CONNECT blockedhttps.urlfilterdb.com:443 HTTP/1.00d
  -- NOTE: also fails: CONNECT https://blockedhttps.urlfilterdb.com
HTTP/1.00d

snipped for brevity

The Encapsulated header says that the HTTP object that has been sent
back contains HTTP response headers, and no body. This leads Squid to
believe it should be parsing an HTTP response, which expects the first
token of the first line to begin with "HTTP/". That is failing because
the server has actually sent back an HTTP request. This explains the
error in the logs, and why it's working for your GET and POST
responses, which do indeed contain HTTP response objects.

So for this particular example, the correct Encapsulated header value
would be 'req-hdr=0, null-body=193'.

I hope that helps,

Nathan.

--
Nathan Hoad
Software Developer
www.getoffmalawn.com


On 9 June 2014 00:22, Marcus Kool marcus.k...@urlfilterdb.com wrote:
 I ran into an issue with the ICAP interface.
 The issue is that a GET/HTTP-based URL can be successfully rewritten but a
 CONNECT/HTTPS-based URL cannot.  I used debug_options ALL,9 to find out what
 is going wrong
 but I fail to understand Squid.

 GET/HTTP to http://googleads.g.doubleclick.net works:

 Squid writes:
 REQMOD icap://127.0.0.1:1344/reqmod_icapd_squid34 ICAP/1.00d
 Host: 127.0.0.1:13440d
 Date: Sun, 08 Jun 2014 13:54:09 GMT0d
 Encapsulated: req-hdr=0, null-body=1350d
 Preview: 00d
 Allow: 2040d
 X-Client-IP: 127.0.0.10d
 0d
 GET http://googleads.g.doubleclick.net/ HTTP/1.00d
 User-Agent: Wget/1.12 (linux-gnu)0d
 Accept: */*0d
 Host: googleads.g.doubleclick.net0d
 0d

 ICAP daemon responds:
 ICAP/1.0 200 OK0d
 Server: ufdbICAPd/1.00d
 Date: Sun, 08 Jun 2014 13:54:09 GMT0d
 ISTag: 5394572c-45670d
 Connection: keep-alive0d
 Encapsulated: res-hdr=0, null-body=2330d
 X-Next-Services: 0d
 0d
 HTTP/1.0 200 OK0d
 Date: Sun, 08 Jun 2014 13:54:09 GMT0d
 Server: ufdbICAPd/1.00d
 Last-Modified: Sun, 08 Jun 2014 13:54:09 GMT0d
 ETag: 498a-0001-5394572c-45670d
 Cache-Control: max-age=100d
 Content-Length: 00d
 Content-Type: text/html0d
 0d
 00d
 0d


 CONNECT/HTTPS does not work:

 Squid writes:
 REQMOD icap://127.0.0.1:1344/reqmod_icapd_squid34 ICAP/1.00d
 Host: 127.0.0.1:13440d
 Date: Sun, 08 Jun 2014 12:29:32 GMT0d
 Encapsulated: req-hdr=0, null-body=870d
 Preview: 00d
 Allow: 2040d
 X-Client-IP: 127.0.0.10d
 0d
 CONNECT googleads.g.doubleclick.net:443 HTTP/1.00d
 User-Agent: Wget/1.12 (linux-gnu)0d
 0d

 ICAP daemon responds:
 ICAP/1.0 200 OK0d
 Server: ufdbICAPd/1.00d
 Date: Sun, 08 Jun 2014 12:29:32 GMT0d
 ISTag: 5394572c-45670d
 Connection: keep-alive0d
 Encapsulated: res-hdr=0, null-body=1930d
 X-Next-Services: 0d
 0d
 CONNECT blockedhttps.urlfilterdb.com:443 HTTP/1.00d--
 NOTE: also fails: CONNECT https://blockedhttps.urlfilterdb.com HTTP/1.00d
 Host: blockedhttps.urlfilterdb.com0d
 User-Agent: Wget/1.12 (linux-gnu)0d
 X-blocked-URL: googleads.g.doubleclick.net0d
 X-blocked-category: ads0d
 0d
 00d
 0d

 and Squid in the end responds to wget:
 HTTP/1.1 500 Internal Server Error
 Server: squid/3.4.5
 Mime-Version: 1.0
 Date: Sun, 08 Jun 2014 13:59:27 GMT
 Content-Type: text/html
 Content-Length: 2804
 X-Squid-Error: ERR_ICAP_FAILURE 0
 Vary: Accept-Language
 Content-Language: en
 X-Cache: MISS from XXX
 X-Cache-Lookup: NONE from XXX:3128
 Via: 1.1 XXX (squid/3.4.5)
 Connection: close

 A fragment of cache.log is below.
 I think that the line
 HttpReply.cc(460) sanityCheckStartLine: HttpReply::sanityCheckStartLine:
 missing protocol prefix (HTTP/) in 'CONNECT blockedhttps.urlfilterdb.com:443
 HTTP/1.00d
 indicates where the problem is.

 Questions:
 The ICAP reply has a HTTP/ protocol prefix so does Squid have a problem
 parsing the reply?

 What is the issue with the reply of the ICAP daemon?

 Not directly related, but interesting: why does Squid sends
CONNECT googleads.g.doubleclick.net:443 HTTP/1.0
 to the ICAP daemon instead of
CONNECT https://googleads.g.doubleclick.net HTTP/1.0

 Thanks
 Marcus

  cache.log:
 -
 2014/06/08 09:29:32.224 kid1| Xaction.cc(413) noteCommRead: read 384 bytes
 2014/06/08 09:29:32.224 kid1| Xaction.cc(73) disableRetries:
 Adaptation::Icap::ModXact from now on cannot be retried  [FD 12;rG/RwP(ieof)
 job9]
 2014/06/08 09:29:32.224 kid1| ModXact.cc(646) parseMore: have 384 bytes to
 parse [FD 12;rG/RwP(ieof) job9]
 2014/06/08 09:29:32.224 kid1| ModXact.cc(647) parseMore:
 ICAP/1.0 200 OK0d
 Server: ufdbICAPd/1.00d
 Date: Sun, 08 Jun 2014 12:29:32 GMT0d
 ISTag: 5394572c-45670d
 Connection: keep-alive0d
 Encapsulated: res-hdr=0, null-body=1930d
 X-Next-Services: 0d
 0d
 CONNECT blockedhttps.urlfilterdb.com:443 HTTP/1.00d
 Host: blockedhttps.urlfilterdb.com0d

[PATCH] Make OPTIONS requests respect icap_persistent_connections

2013-10-17 Thread Nathan Hoad
Hello,

The attached patch appends "Connection: close" to OPTIONS requests if
icap_persistent_connections is set to 'off'. This is in line with the
current behaviour for REQMOD and RESPMOD requests.

Thank you,

Nathan.


icap-persistent-connections-for-options-requests.patch
Description: Binary data


[PATCH] Fix an assert with side effects in ServerStateData::handleAdaptedHeader

2013-07-25 Thread Nathan Hoad
This patch simply moves the
adaptedBodySource->setConsumerIfNotLate(this) call out of an assert()
in ServerStateData::handleAdaptedHeader.

setConsumerIfNotLate conditionally schedules some async calls, so the
call definitely shouldn't be removed by the preprocessor.
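The hazard can be shown in a few lines (setConsumerIfNotLate and handleAdaptedHeaderFixed below are hypothetical stand-ins, not Squid's real members): when NDEBUG is defined, the preprocessor expands assert(expr) to nothing, so any side effect inside expr silently disappears in release builds.

```cpp
#include <cassert>

// Counts how many times the side-effecting call actually ran.
static int scheduledCalls = 0;

// Stand-in for adaptedBodySource->setConsumerIfNotLate(this): it has a side
// effect (scheduling async calls) and returns whether it succeeded.
bool setConsumerIfNotLate() {
    ++scheduledCalls;   // the side effect that must survive NDEBUG builds
    return true;
}

// The fixed pattern: perform the call unconditionally, then assert on the
// result. Compiling out the assert now removes no behaviour.
void handleAdaptedHeaderFixed() {
    const bool registered = setConsumerIfNotLate();
    assert(registered);
}
```

Writing `assert(setConsumerIfNotLate())` instead would mean the consumer is never registered in an NDEBUG build, which is exactly the bug the patch avoids.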

--
Nathan Hoad
Software Developer
www.getoffmalawn.com


assert-with-side-effects.patch
Description: Binary data


Introductions

2013-07-13 Thread Nathan Hoad
Hi folks,

I realised I'm not a member of the developer mailing list, so I
thought I'd sign up. My interests in Squid development mostly lie in
stabilisation and scaling, with a lesser interest in Squid's ICAP client.
I also do my part in maintaining a few deployments of Squid with
various interesting configurations.

I'm also interested in helping keep the website up to date as well.

Regards,

Nathan.

--
Nathan Hoad
Software Developer
www.getoffmalawn.com


[PATCH] Copy external ACL username and password attributes in HttpRequest::inheritProperties for logging

2013-07-11 Thread Nathan Hoad
When a request is successfully adapted, the external ACL username and
password are now inherited with this patch. This means the
LFT_USER_NAME log token can display the username from an external ACL
if available, for adapted requests.

The HttpRequest will inherit the password for good measure as well -
while none too useful, it seems strange to inherit the username but
not the password.

--
Nathan Hoad
Software Developer
www.getoffmalawn.com


external_acl_auth_not_inherited_on_HttpRequest.patch
Description: Binary data