On 16/05/2013 3:45 a.m., Alex Rousskov wrote:
On 05/15/2013 04:45 AM, Amos Jeffries wrote:
On 15/05/2013 10:20 a.m., Alex Rousskov wrote:
On 05/11/2013 10:00 PM, Amos Jeffries wrote:
The attached patch adds a build-time setting SQUID_PIPELINE_MAX to
replace the magic numbers previously used to limit the amount of
prefetching Squid does when the pipeline_prefetch config directive is
enabled. The default remains 2 unless altered by passing
-DSQUID_PIPELINE_MAX=n in CXXFLAGS. The patch also adds some
documentation about why, and about what can be done to improve
pipelining in future.
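For reference, the build-time override amounts to a compile-time
default along these lines (a sketch, not the actual patch hunk):

  /* Sketch: -DSQUID_PIPELINE_MAX=n in CXXFLAGS overrides this,
   * otherwise the historic limit of 2 applies. */
  #ifndef SQUID_PIPELINE_MAX
  #define SQUID_PIPELINE_MAX 2
  #endif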
I do not understand why we need a hard-coded maximum.
I'm not completely sure myself. I think I've checked about half the
code involved and most of it is nicely using linked-list pointer
checks; just these two input points use magic 2's.
I know that there are protocol use cases where things can happen which
cause the pipeline to need discarding, and that causes all sorts of
edge cases. I'm getting the idea these 2's were there solely to avoid
dealing with that mess.
PS. When we are certain it's safe it will be easy to change
pipeline_prefetch into a numeric directive setting a maximum limit,
with some smart code doing dynamic adjusting. But for now I'm simply
not sure enough about what the rest of the code is doing to go further
than removing the magic numbers - as we audit and find other code
relying on the magic values we can alter it to rely on this macro
easily enough.
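To sketch what that numeric limit check could look like (illustration
only, with stand-in names rather than real Squid types, and without
the dynamic-adjusting part):

  // A configured prefetch limit of N permits N queued requests in
  // addition to the one being serviced at the head of the pipeline.
  #include <iostream>

  struct DemoConn {
      int concurrentRequests = 0; // requests currently held by this connection
  };

  static bool okToAddRequest(const DemoConn &conn, int maxPrefetch)
  {
      const int limit = maxPrefetch + 1;
      return conn.concurrentRequests < limit;
  }

  int main()
  {
      DemoConn conn;
      conn.concurrentRequests = 2;
      std::cout << okToAddRequest(conn, 1) << "\n"; // 0 - at the limit (1+1)
      std::cout << okToAddRequest(conn, 4) << "\n"; // 1 - room to prefetch more
  }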
FWIW, I think we should make the limit configurable because I do not
think there are any valid reasons to restrict _all_ pipelining
deployments to 2 pipelined requests.
If the plan is to make the limit configurable, I do not see the point
of replacing a magic number with a #define (which will, in turn, be
removed when the limit becomes configurable). However, if you are sure
this change is needed, feel free to commit.
Very well. Attached is an update which goes straight to the end goal
and makes the pipeline_prefetch directive accept an integer count of
pipeline length.
There is some additional polishing that I would like to do to the
pipeline API of ConnStateData, but that will come in a follow-up patch.
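For example, the directive now takes a plain number; 0 keeps the old
"off" behaviour, and "on"/"off" are still parsed but warn as
deprecated. A sketch of the new usage (the value 5 is purely
illustrative):

  # allow up to 5 prefetched requests per client connection
  pipeline_prefetch 5

  # equivalent to the old default of "off"
  pipeline_prefetch 0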
Amos
=== modified file 'doc/release-notes/release-3.4.sgml'
--- doc/release-notes/release-3.4.sgml 2013-05-14 17:54:30 +0000
+++ doc/release-notes/release-3.4.sgml 2013-05-16 03:38:06 +0000
@@ -179,6 +179,9 @@
<p>New format code <em>%note</em> to log a transaction annotation linked to the
transaction by ICAP, eCAP, a helper, or the <em>note</em> squid.conf directive.
+ <tag>pipeline_prefetch</tag>
+ <p>Updated to take a numeric count of prefetched pipeline requests instead of ON/OFF.
+
<tag>unlinkd_program</tag>
<p>New helper response format utilizing result codes <em>OK</em> and <em>BH</em>,
to signal helper lookup results. Also, key-value response values to return
=== modified file 'src/Parsing.cc'
--- src/Parsing.cc 2013-05-04 13:14:23 +0000
+++ src/Parsing.cc 2013-05-16 13:55:49 +0000
@@ -33,6 +33,7 @@
#include "squid.h"
#include "cache_cf.h"
#include "compat/strtoll.h"
+#include "ConfigParser.h"
#include "Parsing.h"
#include "globals.h"
#include "Debug.h"
@@ -161,7 +162,7 @@
int
GetInteger(void)
{
- char *token = strtok(NULL, w_space);
+ char *token = ConfigParser::strtokFile();
int i;
if (token == NULL)
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h 2013-05-13 03:57:03 +0000
+++ src/SquidConfig.h 2013-05-16 03:21:38 +0000
@@ -332,7 +332,6 @@
int ie_refresh;
int vary_ignore_expire;
- int pipeline_prefetch;
int surrogate_is_remote;
int request_entities;
int detect_broken_server_pconns;
@@ -361,6 +360,8 @@
int client_dst_passthru;
} onoff;
+ int pipeline_max_prefetch;
+
int forward_max_tries;
int connect_retries;
=== modified file 'src/cache_cf.cc'
--- src/cache_cf.cc 2013-05-14 18:36:45 +0000
+++ src/cache_cf.cc 2013-05-16 13:00:04 +0000
@@ -973,12 +973,12 @@
* pipelining OFF, the client may fail to authenticate, but squid's
* state will be preserved.
*/
- if (Config.onoff.pipeline_prefetch) {
+ if (Config.pipeline_max_prefetch > 0) {
Auth::Config *nego = Auth::Config::Find("Negotiate");
Auth::Config *ntlm = Auth::Config::Find("NTLM");
if ((nego && nego->active()) || (ntlm && ntlm->active())) {
debugs(3, DBG_IMPORTANT, "WARNING: pipeline_prefetch breaks NTLM and Negotiate authentication. Forced OFF.");
- Config.onoff.pipeline_prefetch = 0;
+ Config.pipeline_max_prefetch = 0;
}
}
#endif
@@ -2691,6 +2691,29 @@
#define free_tristate free_int
+void
+parse_pipelinePrefetch(int *var)
+{
+ char *token = ConfigParser::strtokFile();
+
+ if (token == NULL)
+ self_destruct();
+
+ if (!strcmp(token, "on")) {
+ debugs(0, DBG_PARSE_NOTE(DBG_IMPORTANT), "WARNING: 'pipeline_prefetch on' is deprecated. Please update to use a number.");
+ *var = 1;
+ } else if (!strcmp(token, "off")) {
+ debugs(0, DBG_PARSE_NOTE(2), "WARNING: 'pipeline_prefetch off' is deprecated. Please update to use '0'.");
+ *var = 0;
+ } else {
+ ConfigParser::strtokFileUndo();
+ parse_int(var);
+ }
+}
+
+#define free_pipelinePrefetch free_int
+#define dump_pipelinePrefetch dump_int
+
static void
dump_refreshpattern(StoreEntry * entry, const char *name, RefreshPattern * head)
{
=== modified file 'src/cf.data.depend'
--- src/cf.data.depend 2013-05-12 05:04:14 +0000
+++ src/cf.data.depend 2013-05-16 04:00:13 +0000
@@ -51,6 +51,7 @@
onoff
peer
peer_access cache_peer acl
+pipelinePrefetch
PortCfg
QosConfig
refreshpattern
=== modified file 'src/cf.data.pre'
--- src/cf.data.pre 2013-05-14 17:53:18 +0000
+++ src/cf.data.pre 2013-05-16 12:06:02 +0000
@@ -8701,17 +8701,21 @@
DOC_END
NAME: pipeline_prefetch
-TYPE: onoff
-LOC: Config.onoff.pipeline_prefetch
-DEFAULT: off
+TYPE: pipelinePrefetch
+LOC: Config.pipeline_max_prefetch
+DEFAULT: 0
+DEFAULT_DOC: Do not pre-parse pipelined requests.
DOC_START
To boost the performance of pipelined requests to closer
match that of a non-proxied environment Squid can try to fetch
- up to two requests in parallel from a pipeline.
+ several requests in parallel from a pipeline. This sets the
+ maximum number of requests to be fetched in advance.
- Defaults to off for bandwidth management and access logging
+ Defaults to 0 (off) for bandwidth management and access logging
reasons.
+ NOTE: pipelining requires persistent connections to clients.
+
WARNING: pipelining breaks NTLM and Negotiate/Kerberos authentication.
DOC_END
=== modified file 'src/client_side.cc'
--- src/client_side.cc 2013-05-13 03:57:03 +0000
+++ src/client_side.cc 2013-05-16 04:46:38 +0000
@@ -2944,14 +2944,25 @@
}
}
-static int
+static bool
connOkToAddRequest(ConnStateData * conn)
{
- int result = conn->getConcurrentRequestCount() < (Config.onoff.pipeline_prefetch ? 2 : 1);
-
+ // if client_persistent_connections OFF, then we cannot service any pipeline >1
+ if (!Config.onoff.client_pconns) {
+ const bool result = (conn->getConcurrentRequestCount() < 1);
+ if (!result)
+ debugs(33, 3, conn->clientConnection << " persistence disabled. Concurrent requests discarded.");
+ return result;
+ }
+
+ // default to the configured pipeline size.
+ // add 1 because the head of pipeline is counted in concurrent requests and not prefetch queue
+ const int concurrentRequestLimit = Config.pipeline_max_prefetch + 1;
+
+ const bool result = (conn->getConcurrentRequestCount() < concurrentRequestLimit);
if (!result) {
- debugs(33, 3, HERE << conn->clientConnection << " max concurrent requests reached");
- debugs(33, 5, HERE << conn->clientConnection << " defering new request until one is done");
+ debugs(33, 3, conn->clientConnection << " max concurrent requests reached (" << concurrentRequestLimit << ")");
+ debugs(33, 5, conn->clientConnection << " defering new request until one is done");
}
return result;
@@ -2979,7 +2990,7 @@
if (in.notYetUsed == 0)
break;
- /* Limit the number of concurrent requests to 2 */
+ /* Limit the number of concurrent requests */
if (!connOkToAddRequest(this)) {
break;
}
=== modified file 'src/client_side.h'
--- src/client_side.h 2013-04-04 06:15:00 +0000
+++ src/client_side.h 2013-05-16 03:39:48 +0000
@@ -178,7 +178,7 @@
/**
* Manages a connection to a client.
*
- * Multiple requests (up to 2) can be pipelined. This object is responsible for managing
+ * Multiple requests (up to pipeline_prefetch) can be pipelined. This object is responsible for managing
* which one is currently being fulfilled and what happens to the queue if the current one
* causes the client connection to be closed early.
*