On 15/07/2014 4:25 a.m., Alex Rousskov wrote:
> On 07/12/2014 10:45 PM, Amos Jeffries wrote:
>
>> +bool
>> +ConnStateData::findProxyProtocolMagic()
>> +{
>> + // http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
>> +
>> + // detect and parse PROXY protocol version 1 header
>> +    if (in.buf.length() > Proxy10magic.length() && in.buf.startsWith(Proxy10magic)) {
>> + return parseProxy10();
>> +
>> + // detect and parse PROXY protocol version 2 header
>> +    } else if (in.buf.length() > Proxy20magic.length() && in.buf.startsWith(Proxy20magic)) {
>> + return parseProxy20();
>> +
>> + // detect and terminate other protocols
>> + } else if (in.buf.length() >= Proxy20magic.length()) {
>> + // input other than the PROXY header is a protocol error
>> + return proxyProtocolError("PROXY protocol error: invalid header");
>> + }
>> +
>
> I know you disagree, but the above looks much clearer than the earlier
> code to me. Thank you.
>
> The "// detect and ..." comments are pretty much not needed now because
> the code is self-documenting! The "else"s are also not needed. Your call
> on whether to remove that redundancy.
>
> Consider adding
>
> // TODO: detect short non-magic prefixes earlier to avoid
> // waiting for more data which may never come
>
> but this is not a big deal because most non-malicious clients will not
> send valid non-PROXY requests that are 12 bytes or less, I guess.
>
Added.
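The suggested early check can be sketched roughly as follows; this is a minimal illustration with hypothetical names (matchPrefix, MagicMatch), not Squid's actual SBuf/ConnStateData API. The point is that a mismatch against a magic prefix is usually visible well before a full magic-length buffer arrives:

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Hypothetical names, not Squid's API. A mismatch between the received
// bytes and a magic prefix is often visible before a full magic-length
// buffer arrives, so the connection can be rejected early instead of
// waiting for more data which may never come.
enum class MagicMatch { No, Maybe, Yes };

MagicMatch matchPrefix(const std::string &buf, const std::string &magic)
{
    const std::string::size_type n = std::min(buf.size(), magic.size());
    if (buf.compare(0, n, magic, 0, n) != 0)
        return MagicMatch::No;      // can never become the magic: reject now
    // everything received so far matches the magic prefix
    return buf.size() >= magic.size() ? MagicMatch::Yes : MagicMatch::Maybe;
}
```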
>
>> + // XXX: should do this in start(), but SSL/TLS operations begin before start() is called
>
> I agree that this should be done in start(). Fortunately, the code this
> XXX refers to does not rely on job protection (AFAICT) so it is not a
> big deal. The XXX is really about the SSL code that makes the move
> difficult.
>
Where should socket MTU discovery be configured, if not at the point
just before the connection actually starts being used?
The constructor where this code block exists runs well after accept(),
but also well before any socket usage (other than TLS).
>
>>>> -NAME: follow_x_forwarded_for
>>>> +NAME: proxy_forwarded_access follow_x_forwarded_for
>
>>> The new name sounds worse than the old one. Hopefully this can be left
>>> as is or renamed to something better after the "proxy-surrogate" issue
>>> is resolved.
>
>> Is "forwarded_access" objectionable?
>
> IMO, it is misleading/awkward. Follow_x_forwarded_for was pretty good.
> We can use something like follow_forwarded_client_info or
> trust_relayed_client_info, but let's wait for the primary naming issue
> to be resolved first. That resolution might help us here.
>
>
>>>> +bool
>>>> +ConnStateData::parseProxyProtocolMagic()
>>>
>>> This appears to parse a lot more than just the magic characters. Please
>>> rename to parseProxyProtocolHeader() or similar.
>
>> The entire PROXY protocol is "magic" connection header.
>
> Not really. In this context, "magic" is, roughly, a "rare" string
> constant typically used as a prefix to detect/identify the following
> structure or message. The PROXY protocol header contains both "magical"
> and "regular" parts.
>
>
>> +/**
>> + * Test the connection read buffer for PROXY protocol header.
>> + * Version 1 and 2 header currently supported.
>> + */
>> +bool
>> +ConnStateData::findProxyProtocolMagic()
>
> This method does not just "test". It actually parses. IMO,
> findProxyProtocolMagic() should be called parseProxyProtocolHeader().
>
> No, it does not matter that the actual non-magic parsing code is in
> other parsing methods. The method description and name should reflect
> what the method does from the caller point of view. The method internals
> are not that important when you are naming and describing the interface.
>
Okay, okay. Done.
>
>>>> + needProxyProtocolHeader_ = xact->squidPort->flags.proxySurrogate;
>>>> + if (needProxyProtocolHeader_)
>>>> + proxyProtocolValidateClient(); // will close the connection on failure
>>>
>>> Please do not place things that require job protection in a class
>>> constructor. Calling things like stopSending() and stopReceiving() does
>>> not (or will not) work well when we are just constructing a
>>> ConnStateData object. Nobody may notice (at the right time) that you
>>> "stopped" something because nothing has been started yet.
>>
>> Did not require job protection until converting
>> stopSending()/stopReceiving() into mustStop() calls.
>
> It only looked that way, but sooner or later that code would break
> because it was essentially stopping the job inside the job's constructor.
>
>
>> Also, note that ConnStateData is not a true AsyncJob. It never started
>> with AsyncJob::Start() and cleanup is equally nasty as this setup
>> constructor (prior to any changes I am adding).
>
> Yes, I know that ConnStateData is an AsyncJob that already violates a
> lot of job API principles. That is not your fault, of course. However,
> adding more violations into the job constructor makes things worse
> (somebody would have to rewrite that later).
>
>
>> I have done as suggested and created the AsyncJob::Start() functionality
>> for it.
>
> Thank you. Hopefully, you would be able to keep that change despite the
> extra work it requires.
>
>
>> However, this means that the PROXY protocol is no longer capable of
>> being used on https_port. The PROXY protocol header comes before TLS
>> negotiation on the wire, but ConnStateData::start() is only called by
>> our code after SSL/TLS negotiation and SSL-bump operations complete.
>
> I agree that there is a problem with https_port support, but the "moving
> PROXY ACL check into start() breaks https_port support for PROXY"
> argument does not work for me: I agree that the SSL code in trunk does a
> lot of things between creating the ConnStateData job and calling
> readSomeData(). However, that was true for the original (mk1) patch as
> well! Moreover, the original (mk1) patch could close the connection in
> the ConnStateData constructor and then proceed with negotiating SSL for
> that connection. Thus, I do not think fixing ConnStateData constructor
> is the source of the https_port support problems. That support was
> already broken in mk1.
>
> I see two correct ways to address the https_port problem:
>
> 1. Refuse PROXY support on https_port for now. As any support
> limitation, this is unfortunate, but I think it should be accepted. The
> configuration code should be adjusted to reject Squid configurations
> that use proxy-surrogate with an https_port.
Doing this one. I have no desire to re-write all the SSL negotiation
logic right now.
>
> 2. Support PROXY on https_port. As you have mentioned, this probably
> requires moving SSL negotiations inside the ConnStateData job so that
> the job can be Start()ed when it is created instead of passing the job
> as a POD-like structure during SSL negotiations.
>
>
>> +<p><em>Known Issue:</em> Due to design issues HTTPS traffic is not yet accepted
>> +  over this protocol. So use of <em>proxy-surrogate</em> on <em>https_port</em>
>> +  is not supported.
>
> If you continue going with #1, please do not blame mysterious "design
> issues" (Squid design? PROXY protocol design? OpenSSL design? Internet
> design?) but simply say that PROXY for https_port is not yet supported.
> There is no design issue here AFAICT. Https_port support just needs more
> development work. This is just an implementation limitation.
They are Squid design issues:
1) creating the ConnStateData job before negotiating TLS/SSL using
old-style global functions, but only start()ing it after TLS negotiation.
2) sharing a socket between ConnStateData read(2) and OpenSSL SSL_read()
operations.
Specifically, #2 is a problem because ConnStateData reads an arbitrary
number of bytes off the socket into its I/O buffer before identifying
the end of the PROXY header. So even if we started ConnStateData early
and paused it for TLS/SSL negotiation later, the AsyncCall delays
between accept() and ConnStateData::start() mean an unknown number of
TLS/SSL bytes may already have been sucked in.
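For what it's worth, the v2 binary header does make exact-length consumption possible once enough bytes have arrived: per the PROXY spec, the 16-byte fixed part carries a big-endian length field (octets 14-15) giving the size of the address block that follows. A rough sketch, with a hypothetical helper name:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: given the bytes received so far, return the exact
// total PROXY v2 header length, or -1 if the fixed 16-byte part has not
// fully arrived yet. A caller could then consume exactly that many bytes
// and leave any following TLS bytes untouched in the buffer.
std::ptrdiff_t proxyV2TotalLength(const std::vector<uint8_t> &buf)
{
    if (buf.size() < 16)
        return -1; // length field (octets 14-15) not available yet
    const uint16_t addrLen = (static_cast<uint16_t>(buf[14]) << 8) | buf[15];
    return 16 + addrLen;
}
```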
>
> BTW, when receiving on an https_port, will the PROXY header be
> encrypted? If yes, why not postpone the ACL check until the connection
> is decrypted? And where does the PROXY draft explain/define whether the
> PROXY header should be encrypted?
It is documented as a prefix to the TCP-layer payload. So AIUI, the
header is an unencrypted prefix before the TLS ClientHello.
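That also means the first bytes on the wire are enough to tell the cases apart: PROXY v1 begins with ASCII "PROXY ", v2 with its fixed 12-byte binary signature, while a TLS record begins with content-type byte 0x16 (handshake). A hedged sketch, with illustrative names of my own:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative classification of the first cleartext bytes on a port; the
// names are mine, not Squid's. The v1 magic and v2 signature come from the
// PROXY spec; 0x16 is the TLS "handshake" record content type.
static const std::string v1magic("PROXY ");
static const std::string v2magic("\r\n\r\n\0\r\nQUIT\n", 12); // embedded NUL

enum class Wire { ProxyV1, ProxyV2, Tls, Other };

Wire classifyFirstBytes(const std::string &buf)
{
    if (buf.size() >= v1magic.size() && buf.compare(0, v1magic.size(), v1magic) == 0)
        return Wire::ProxyV1;
    if (buf.size() >= v2magic.size() && buf.compare(0, v2magic.size(), v2magic) == 0)
        return Wire::ProxyV2;
    if (!buf.empty() && static_cast<uint8_t>(buf[0]) == 0x16)
        return Wire::Tls;
    return Wire::Other;
}
```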
>
>> +<p>Squid currently supports receiving HTTP via version 1 or 2 of the protocol.
> ...
>
> "receiving HTTP via [PROXY] protocol" sounds awkward to me. The PROXY
> protocol does not envelop or embed the HTTP protocol that follows the
> PROXY header IMO. It just starts a connection with a small header and
> does not have a notion of "PROXY message body".
>
> If "version 1 or 2" is the PROXY protocol version (rather than the HTTP
> version), then the above is inconsistent with the version 1.5 documented
> later:
>
>> +proxy-protocol.txt
>> + Documents Proxy Protocol 1.5, for communicating original client IP
>> + details between consenting proxies and servers even when
>> + transparent interception is taking place.
>
> I think the above ought to say "Documents PROXY Protocol versions 1 and
> 2", just like the draft title.
>
Fixed.
>
>> +<p>PROXY protocol provides a simple way for proxies and tunnels of any kind to
>> +  relay the original client source details ...
>
>> +proxy-protocol.txt
>> + Documents Proxy Protocol 1.5, for communicating original client IP
>> + details ...
>
> AFAICT, the PROXY protocol supports a lot more than client IP details.
> It also communicates client source port, transport protocol, and the
> destination address details.
Fixed.
>
>
>> +<sect1>Support PROXY protocol
>
> We only support receiving PROXY protocol; Squid does not support being a
> PROXY protocol client, right? Can we be less ambitious in the above
> claim then? Something like "PROXY protocol support (receiving)" or
> "Support for receiving PROXY protocol header" would work better IMO.
>
>
Done.
>> + HTTP message Forwarded header, or
>> + HTTP message X-Forwarded-For header, or
>> + PROXY protocol connection header.
>
>> + Allowing or Denying the X-Forwarded-For or Forwarded headers to
>> + be followed to find the original source of a request. Or permitting
>> + a client proxy to connect using PROXY protocol.
>
> What happens when the incoming traffic has both an allowed PROXY header
> and an allowed HTTP X-Forwarded-For header? Which takes priority?
PROXY is about the TCP connection and is equivalent to NAT on the local
machine.
XFF/Forwarded is about the single message in which it resides.
When evaluating trust, the direct TCP details are evaluated first, then
PROXY, then the XFF entries:

  TCP direct IP
  [ PROXY src-IP ]
  XFF last entry
  [ XFF 2nd-to-last entry ]
  [ ... ]
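The evaluation order above might be modelled like this; the names and the "trusted" callback are hypothetical stand-ins for the ACL check, not Squid's real machinery:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical model of the trust chain diagrammed above: direct TCP peer
// first, then the PROXY source IP (if any), then XFF entries followed
// right-to-left while each hop remains trusted.
std::string effectiveClient(const std::string &tcpPeer,
                            const std::string *proxySrc,         // from PROXY header, if any
                            const std::vector<std::string> &xff, // leftmost entry is oldest
                            const std::function<bool(const std::string &)> &trusted)
{
    if (!trusted(tcpPeer))
        return tcpPeer;                      // untrusted direct peer: stop here
    std::string client = proxySrc ? *proxySrc : tcpPeer;
    // follow XFF from the last (nearest) entry backwards while trust holds
    for (auto it = xff.rbegin(); it != xff.rend(); ++it) {
        if (!trusted(client))
            break;
        client = *it;
    }
    return client;
}
```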
> What
> if X-Forwarded-For header changes client information in the middle of a
> connection? Please document these cases. I could not find that info in
> the PROXY protocol draft, but perhaps I missed it.
Same as if XFF was received on a non-PROXY connection. AFAIK that
changes only the HttpRequest values for the particular message.
XFF does not change its behaviour in any way due to the PROXY protocol's
existence. The connection IP is checked for trust, then the XFF entries
until one fails. It just happens that the connection IP now comes from
PROXY.
The relation between XFF and PROXY is that XFF trustworthiness depends
on PROXY being trusted, if present. The ACL trust-assignment semantics
are identical when handled as diagrammed above, PROXY being one step up
from the direct TCP details, which also need to be verified to retain
any trust link between the direct TCP client IP and the indirect client
IP in XFF.
BTW, this confusion you seem to have between the two is exactly why I am
trying to rename follow_x_forwarded_for: it does not necessarily relate
to the XFF header when evaluating trust of the PROXY protocol, and
certainly won't when we upgrade to supporting only the Forwarded: header.
>
> Please adjust the documentation to make it clear whether the features
> are exclusive (either/or) or can co-exist (and/or).
>
>
>> + proxy-surrogate
>> + Support for PROXY protocol version 1 or 2 connections.
>> + The proxy_forwarded_access is required to whitelist
>> + downstream proxies which can be trusted.
>> +
>
> What happens to the destination information in the PROXY header? Is it
> used in any way? Should we mention how it is used (or the fact that it
> is not used)?
If we can trust the source then we use the PROXY protocol header as the
PROXY protocol specifies. Do we really need to enumerate whole chapters
of the protocol spec in the documentation of its config option?
For example, I don't see anything from RFC 2616 or 7230 documenting
"accel", or anything documenting TLS handshakes for ssl-bump.
If we need detailed, long descriptions we have the wiki and/or the PROXY
spec document itself.
>
> Consider s/downstream/client/.
>
>
>> + /** marks ports receiving PROXY protocol traffic
>> + *
>> + * Indicating the following are required:
>> + * - PROXY protocol magic header
>> + * - src/dst IP retrieved from magic PROXY header
>> + * - reverse-proxy traffic prohibited
>> + * - intercepted traffic prohibited
>> + */
>> + bool proxySurrogate;
>
> Why is reverse-proxy traffic prohibited? The PROXY protocol draft
> mentions reverse proxies being used, but it is not clear to me whether
> they are only supported as PROXY protocol clients. I do not know all the
> PROXY details, but it seems to me that reverse proxies should be allowed
> as servers (AFAICT, the explicit configuration requirement is the key
> here, not the reverse/forward role of the proxy!).
Hmm. Good point.
I'm removing those prohibitions and replacing with just an implicit
no-spoofing for TPROXY.
>
>>>> + debugs(33, 5, "PROXY protocol on connection " << clientConnection);
>>>> + clientConnection->local = originalDest;
>>>> + clientConnection->remote = originalClient;
>>>> + debugs(33, 5, "PROXY upgrade: " << clientConnection);
>>>
>>> We use this kind of address resetting code in many places, right? Please
>>> encapsulate it (together with the debugging) into a
>>> Connection::resetAddrs() or a similar method.
>>
>> Two. PROXY/1.0 and PROXY/2.0 parsers.
>
>
> I found more, including:
>
>> ./log/TcpLogger.cc: futureConn->remote = remote;
>> ./log/TcpLogger.cc- futureConn->local.setAnyAddr();
The local setup there uses setAnyAddr() and setIPv4() as conditional
optimizations. Using a copy would require adding an otherwise needless
local variable allocation.
>
>> ./ftp.cc- conn->local = ftpState->ctrl.conn->local;
>> ./ftp.cc- conn->local.port(0);
>> ./ftp.cc: conn->remote = ftpState->ctrl.conn->remote;
>> ./ftp.cc- conn->remote.port(port);
>
>> ./ftp.cc- conn->local = ftpState->ctrl.conn->local;
>> ./ftp.cc- conn->local.port(0);
>> ./ftp.cc: conn->remote = ipaddr;
>> ./ftp.cc- conn->remote.port(port);
>
>> ./dns_internal.cc- conn->local = Config.Addrs.udp_incoming;
>> ./dns_internal.cc-
>> ./dns_internal.cc: conn->remote = nameservers[nsv].S;
>
>
Okay. Adding setAddrs(local, remote) directly to trunk, since it is pure scope creep for this patch.
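A minimal sketch of what such a setAddrs(local, remote) helper might look like; the Connection struct and the trace line are simplified stand-ins for Comm::Connection and debugs():

```cpp
#include <cassert>
#include <iostream>
#include <string>

// Simplified stand-in for Comm::Connection; the real method would live
// there and use debugs() instead of a std::cerr trace.
struct Connection {
    std::string local;
    std::string remote;

    // reset both endpoint addresses in one place, with one debug trace,
    // replacing the repeated two-assignment pattern found at call sites
    void setAddrs(const std::string &aLocal, const std::string &aRemote) {
        local = aLocal;
        remote = aRemote;
        std::cerr << "address reset: remote=" << remote << " local=" << local << "\n";
    }
};
```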
>
>>> * When, in a misconfigured setup, somebody sends a PROXY header to a
>>> regular Squid HTTP port, does the Squid error look obvious/clear enough?
>>> Or will the admin have a hard time understanding why things do not work
>>> in that case?
>>>
>>
>> Trunk will die with a 400 error quoting the PROXY header as the
>> "URL" or buffer content.
>
> By "die", you mean respond with an HTTP 400 error, right?
>
Er, yes. And close the TCP connection.
>
>> It seems clear enough to
>> me not to need new code for that type of config error case.
>
> I disagree because admins do not normally see 400 errors (until users
> start complaining). Most see log messages though. Please note that in a
> multi-port setup, it may not be obvious that something is misconfigured
> because there may be few or no 400 errors when Squid starts serving
> requests.
>
> There are two kinds of likely misconfigurations:
>
> 1) Client sends PROXY. Squid is not configured to receive one. We could
> handle this better, but I do not insist on adding code to detect and
> warn about such cases because the required code would be relatively
> complex, and because it would make it even easier to spam cache.log with
> error messages when there is no misconfiguration at all (unless we add
> even more code).
>
> 2) Client does not send PROXY. Squid is configured to require one. I
> think we should announce such cases in cache.log. There is already code
> to detect this problem. AFAICT, we just need to add a logging line.
>
Added a section 33, level 2 debugs() display of the error message and
connection details. I think higher levels would potentially be too
noisy; terminating connections on error is a routine part of PROXY.
If we need more complex debugging of this we should probably allocate a
debug section to it, but for now section 33 seems fine.
Amos
=== modified file 'doc/release-notes/release-3.5.sgml'
--- doc/release-notes/release-3.5.sgml 2014-07-13 05:28:15 +0000
+++ doc/release-notes/release-3.5.sgml 2014-07-25 12:05:51 +0000
@@ -26,40 +26,41 @@
<sect1>Known issues
<p>
Although this release is deemed good enough for use in many setups, please
note the existence of
<url url="http://bugs.squid-cache.org/buglist.cgi?query_format=advanced&product=Squid&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&version=3.5" name="open bugs against Squid-3.5">.
<sect1>Changes since earlier releases of Squid-3.5
<p>
The 3.5 change history can be <url url="http://www.squid-cache.org/Versions/v3/3.5/changesets/" name="viewed here">.
<sect>Major new features since Squid-3.4
<p>Squid 3.5 represents a new feature release above 3.4.
<p>The most important of these new features are:
<itemize>
<item>Support libecap v1.0
<item>Authentication helper query extensions
<item>Support named services
<item>Upgraded squidclient tool
<item>Helper support for concurrency channels
+ <item>Receive PROXY protocol, Versions 1 &amp; 2
</itemize>
Most user-facing changes are reflected in squid.conf (see below).
<sect1>Support libecap v1.0
<p>Details at <url url="http://wiki.squid-cache.org/Features/BLAH">.
<p>The new libecap version allows Squid to better check the version of
the eCAP adapter being loaded as well as the version of the eCAP library
being used.
<p>Squid-3.5 can support eCAP adapters built with libecap v1.0,
but no longer supports adapters built with earlier libecap versions
due to API changes.
<sect1>Authentication helper query extensions
<p>Details at <url url="http://www.squid-cache.org/Doc/config/auth_param/">.
@@ -146,71 +147,111 @@
The default is to use X.509 certificate encryption instead.
<p>When performing TLS/SSL server certificates are always verified, the
results shown at debug level 3. The encrypted type is displayed at debug
level 2 and the connection is used to send and receive the messages
regardless of verification results.
<sect1>Helper support for concurrency channels
<p>Helper concurrency greatly reduces the communication lag between Squid
and its helpers allowing faster transaction speeds even on sequential
helpers.
<p>The Digest authentication, Store-ID, and URL-rewrite helpers packaged
with Squid have been updated to support concurrency channels. They will
auto-detect the <em>channel-ID</em> field and will produce the appropriate
response format.
With these helpers concurrency may now be set to 0 or any higher number as
desired.
+<sect1>Receive PROXY protocol, Versions 1 &amp; 2
+<p>More info at <url url="http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt">
+
+<p>PROXY protocol provides a simple way for proxies and tunnels of any kind to
+ relay the original client source details without having to alter or understand
+ the protocol being relayed on the connection.
+
+<p>Squid currently supports receiving HTTP traffic from a client proxy using this protocol.
+ An http_port which has been configured to receive this protocol may only be used to
+ receive traffic from client software sending in this protocol.
+ Regular forward-proxy HTTP traffic is not accepted.
+
+<p>The <em>accel</em> and <em>intercept</em> options are still used to identify the
+ traffic syntax being delivered by the client proxy.
+
+<p>Squid can be configured by adding an <em>http_port</em>
+ with the <em>proxy-surrogate</em> mode flag. The <em>proxy_forwarded_access</em>
+ must also be configured with <em>src</em> ACLs to whitelist proxies which are
+ trusted to send correct client details.
+
+<p>Forward-proxy traffic from a client proxy:
+<verbatim>
+ http_port 3128 proxy-surrogate
+ proxy_forwarded_access allow localhost
+</verbatim>
+
+<p>Intercepted traffic from a client proxy or tunnel:
+<verbatim>
+ http_port 3128 intercept proxy-surrogate
+ proxy_forwarded_access allow localhost
+</verbatim>
+
+<p><em>Known Issue:</em>
+ Use of <em>proxy-surrogate</em> on <em>https_port</em> is not supported.
+
+
<sect>Changes to squid.conf since Squid-3.4
<p>
There have been changes to Squid's configuration file since Squid-3.4.
<p>Squid supports reading configuration option parameters from external
files using the syntax <em>parameters("/path/filename")</em>. For example:
<verb>
acl whitelist dstdomain parameters("/etc/squid/whitelist.txt")
</verb>
<p>The squid.conf macro ${service_name} is added to provide the service name
of the process parsing the config.
<p>There have also been changes to individual directives in the config file.
This section gives a thorough account of those changes in three categories:
<itemize>
<item><ref id="newtags" name="New tags">
<item><ref id="modifiedtags" name="Changes to existing tags">
<item><ref id="removedtags" name="Removed tags">
</itemize>
<p>
<sect1>New tags<label id="newtags">
<p>
<descrip>
<tag>collapsed_forwarding</tag>
<p>Ported from Squid-2 with no configuration or visible behaviour
changes.
Collapsing of requests is performed across SMP workers.
+ <tag>proxy_forwarded_access</tag>
+ <p>Renamed from <em>follow_x_forwarded_for</em> and extended to control more
+    ways for locating the indirect (original) client IP details.
+
<tag>send_hit</tag>
<p>New configuration directive to enable/disable sending cached content
based on ACL selection. ACL can be based on client request or cached
response details.
<tag>sslproxy_session_cache_size</tag>
<p>New directive which sets the cache size to use for TLS/SSL sessions
cache.
<tag>sslproxy_session_ttl</tag>
<p>New directive to specify the time in seconds the TLS/SSL session is
valid.
<tag>store_id_extras</tag>
<p>New directive to send additional lookup parameters to the configured
Store-ID helper program. It takes a string which may contain
logformat %macros.
<p>The Store-ID helper input format is now:
<verb>
[channel-ID] url [extras]
</verb>
<p>The default value for extras is: "%>a/%>A %un %>rm myip=%la myport=%lp"
@@ -259,75 +300,80 @@
<p>These connections differ from HTTP persistent connections in that they
have not been used for HTTP messaging (and may never be). They may be
turned into persistent connections after their first use subject to the
same keep-alive critera any HTTP connection is checked for.
<tag>forward_max_tries</tag>
<p>Default value increased to <em>25 destinations</em> to allow better
contact and IPv4 failover with domains using long lists of IPv6
addresses.
<tag>ftp_epsv</tag>
<p>Converted into an Access List with allow/deny value driven by ACLs
using Squid standard first line wins matching basis.
<p>The old values of <em>on</em> and <em>off</em> imply <em>allow all</em>
and <em>deny all</em> respectively and are now deprecated.
Do not combine use of on/off values with ACL configuration.
<tag>http_port</tag>
<p><em>protocol=</em> option altered to accept protocol version details.
Currently supported values are: HTTP, HTTP/1.1, HTTPS, HTTPS/1.1
+ <p>New option <em>proxy-surrogate</em> to mark ports receiving PROXY
+ protocol version 1 or 2 traffic.
<tag>https_port</tag>
<p><em>protocol=</em> option altered to accept protocol version details.
Currently supported values are: HTTP, HTTP/1.1, HTTPS, HTTPS/1.1
<tag>logformat</tag>
<p>New format code <em>%credentials</em> to log the client credentials
token.
<p>New format code <em>%tS</em> to log transaction start time in
"seconds.milliseconds" format, similar to the existing access.log
"current time" field (%ts.%03tu) which logs the corresponding
transaction finish time.
</descrip>
<sect1>Removed tags<label id="removedtags">
<p>
<descrip>
<tag>cache_dir</tag>
<p><em>COSS</em> storage type is formally replaced by Rock storage type.
<tag>cache_dns_program</tag>
<p>DNS external helper interface has been removed. It was no longer
able to provide high performance service and the internal DNS
client library with multicast DNS cover all modern use-cases.
<tag>cache_peer</tag>
<p><em>idle=</em> replaced by <em>standby=</em>.
<p>NOTE that standby connections are started earlier and available in
more circumstances than squid-2 idle connections were. They are
also spread over all IPs of the peer.
<tag>dns_children</tag>
<p>DNS external helper interface has been removed.
+ <tag>follow_x_forwarded_for</tag>
+ <p>Renamed to <em>proxy_forwarded_access</em> and extended.
+
</descrip>
<sect>Changes to ./configure options since Squid-3.4
<p>
There have been some changes to Squid's build configuration since Squid-3.4.
This section gives an account of those changes in three categories:
<itemize>
<item><ref id="newoptions" name="New options">
<item><ref id="modifiedoptions" name="Changes to existing options">
<item><ref id="removedoptions" name="Removed options">
</itemize>
<sect1>New options<label id="newoptions">
<p>
<descrip>
<tag>BUILDCXX=</tag>
=== modified file 'doc/rfc/1-index.txt'
--- doc/rfc/1-index.txt 2014-06-09 01:38:06 +0000
+++ doc/rfc/1-index.txt 2014-07-25 09:18:15 +0000
@@ -1,40 +1,43 @@
draft-ietf-radext-digest-auth-06.txt
RADIUS Extension for Digest Authentication
A proposed extension to Radius for Digest authentication
via RADIUS servers.
draft-cooper-webi-wpad-00.txt
draft-ietf-svrloc-wpad-template-00.txt
Web Proxy Auto-Discovery Protocol -- WPAD
documents how MSIE and several other browsers automatically
find their proxy settings from DHCP and/or DNS
draft-forster-wrec-wccp-v1-00.txt
WCCP 1.0
draft-wilson-wccp-v2-12-oct-2001.txt
WCCP 2.0
draft-vinod-carp-v1-03.txt
Microsoft CARP peering algorithm
+proxy-protocol.txt
+ The PROXY protocol, Versions 1 & 2
+
rfc0959.txt
FTP
rfc1035.txt
DNS for IPv4
rfc1157.txt
A Simple Network Management Protocol (SNMP)
SNMP v1 Specification. SNMP v2 is documented in several RFCs,
namely, 1902,1903,1904,1905,1906,1907.
rfc1738.txt
Uniform Resource Locators (URL)
(updated by RFC 3986, but not obsoleted)
rfc1902.txt
Structure of Managament Information (SMI) for SNMPv2
Management information is viewed as a collection of managed objects,
the Management Information Base (MIB). MIB modules are
written using an adapted subset of OSI's Abstract Syntax
=== modified file 'src/Makefile.am'
--- src/Makefile.am 2014-07-23 12:51:55 +0000
+++ src/Makefile.am 2014-07-25 12:05:51 +0000
@@ -1609,40 +1609,41 @@
acl/libapi.la \
base/libbase.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
comm/libcomm.la \
eui/libeui.la \
icmp/libicmp.la icmp/libicmp-core.la \
log/liblog.la \
format/libformat.la \
$(REPL_OBJS) \
$(DISK_LIBS) \
$(DISK_OS_LIBS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
$(SSL_LIBS) \
anyp/libanyp.la \
ipc/libipc.la \
mgr/libmgr.la \
$(SNMP_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(NETTLELIB) \
$(REGEXLIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_testCacheManager_LDFLAGS = $(LIBADD_DL)
tests_testCacheManager_DEPENDENCIES = \
$(REPL_OBJS) \
$(SQUID_CPPUNIT_LA)
tests_testDiskIO_SOURCES = \
CacheDigest.h \
tests/stub_CacheDigest.cc \
cbdata.cc \
@@ -2037,40 +2038,41 @@
$(DISKIO_GEN_SOURCE)
tests_testEvent_LDADD = \
http/libsquid-http.la \
ident/libident.la \
acl/libacls.la \
acl/libstate.la \
acl/libapi.la \
base/libbase.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
anyp/libanyp.la \
icmp/libicmp.la icmp/libicmp-core.la \
comm/libcomm.la \
log/liblog.la \
format/libformat.la \
$(REPL_OBJS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
$(SSL_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(DISK_LIBS) \
$(DISK_OS_LIBS) \
ipc/libipc.la \
mgr/libmgr.la \
$(SNMP_LIBS) \
$(NETTLELIB) \
$(REGEXLIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_testEvent_LDFLAGS = $(LIBADD_DL)
tests_testEvent_DEPENDENCIES = \
$(REPL_OBJS) \
$(SQUID_CPPUNIT_LA)
@@ -2287,40 +2289,41 @@
$(DISKIO_GEN_SOURCE)
tests_testEventLoop_LDADD = \
http/libsquid-http.la \
ident/libident.la \
acl/libacls.la \
acl/libstate.la \
acl/libapi.la \
base/libbase.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
anyp/libanyp.la \
icmp/libicmp.la icmp/libicmp-core.la \
comm/libcomm.la \
log/liblog.la \
format/libformat.la \
$(REPL_OBJS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
$(SSL_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(DISK_LIBS) \
$(DISK_OS_LIBS) \
ipc/libipc.la \
mgr/libmgr.la \
$(SNMP_LIBS) \
$(NETTLELIB) \
$(REGEXLIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_testEventLoop_LDFLAGS = $(LIBADD_DL)
tests_testEventLoop_DEPENDENCIES = \
$(REPL_OBJS) \
$(SQUID_CPPUNIT_LA)
@@ -2535,40 +2538,41 @@
acl/libstate.la \
acl/libapi.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
anyp/libanyp.la \
icmp/libicmp.la icmp/libicmp-core.la \
comm/libcomm.la \
log/liblog.la \
format/libformat.la \
$(REPL_OBJS) \
$(DISK_LIBS) \
$(DISK_OS_LIBS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
$(SSL_LIBS) \
ipc/libipc.la \
base/libbase.la \
mgr/libmgr.la \
$(SNMP_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(NETTLELIB) \
$(REGEXLIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_test_http_range_LDFLAGS = $(LIBADD_DL)
tests_test_http_range_DEPENDENCIES = \
$(SQUID_CPPUNIT_LA)
tests_testHttpParser_SOURCES = \
Debug.h \
HttpParser.cc \
HttpParser.h \
MemBuf.cc \
@@ -2825,40 +2829,41 @@
acl/libacls.la \
acl/libstate.la \
acl/libapi.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
$(SSL_LIBS) \
ipc/libipc.la \
base/libbase.la \
mgr/libmgr.la \
anyp/libanyp.la \
$(SNMP_LIBS) \
icmp/libicmp.la icmp/libicmp-core.la \
comm/libcomm.la \
log/liblog.la \
format/libformat.la \
http/libsquid-http.la \
$(REPL_OBJS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(DISK_OS_LIBS) \
$(NETTLELIB) \
$(REGEXLIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_testHttpRequest_LDFLAGS = $(LIBADD_DL)
tests_testHttpRequest_DEPENDENCIES = \
$(REPL_OBJS) \
$(SQUID_CPPUNIT_LA)
## why so many sources? well httpHeaderTools requites ACLChecklist & friends.
## first line - what we are testing.
tests_testStore_SOURCES= \
@@ -3669,40 +3674,41 @@
eui/libeui.la \
acl/libstate.la \
acl/libapi.la \
base/libbase.la \
libsquid.la \
ip/libip.la \
fs/libfs.la \
$(SSL_LIBS) \
ipc/libipc.la \
mgr/libmgr.la \
$(SNMP_LIBS) \
icmp/libicmp.la icmp/libicmp-core.la \
comm/libcomm.la \
log/liblog.la \
$(DISK_OS_LIBS) \
format/libformat.la \
$(REGEXLIB) \
$(REPL_OBJS) \
$(ADAPTATION_LIBS) \
$(ESI_LIBS) \
+ parser/libsquid-parser.la \
$(top_builddir)/lib/libmisccontainers.la \
$(top_builddir)/lib/libmiscencoding.la \
$(top_builddir)/lib/libmiscutil.la \
$(NETTLELIB) \
$(COMPAT_LIB) \
$(SQUID_CPPUNIT_LIBS) \
$(SQUID_CPPUNIT_LA) \
$(SSLLIB) \
$(KRB5LIBS) \
$(COMPAT_LIB) \
$(XTRA_LIBS)
tests_testURL_LDFLAGS = $(LIBADD_DL)
tests_testURL_DEPENDENCIES = \
$(REPL_OBJS) \
$(SQUID_CPPUNIT_LA)
tests_testSBuf_SOURCES= \
tests/testSBuf.h \
tests/testSBuf.cc \
tests/testMain.cc \
=== modified file 'src/anyp/TrafficMode.h'
--- src/anyp/TrafficMode.h 2013-02-04 09:47:50 +0000
+++ src/anyp/TrafficMode.h 2014-07-25 06:12:42 +0000
@@ -8,40 +8,50 @@
 * Set of 'mode' flags defining types of traffic which can be received.
*
* Use to determine the processing steps which need to be applied
* to this traffic under any special circumstances which may apply.
*/
class TrafficMode
{
public:
    TrafficMode() : accelSurrogate(false), proxySurrogate(false), natIntercept(false), tproxyIntercept(false), tunnelSslBumping(false) {}
    TrafficMode(const TrafficMode &rhs) { operator =(rhs); }
    TrafficMode &operator =(const TrafficMode &rhs) { memcpy(this, &rhs, sizeof(TrafficMode)); return *this; }
/** marks HTTP accelerator (reverse/surrogate proxy) traffic
*
* Indicating the following are required:
* - URL translation from relative to absolute form
* - restriction to origin peer relay recommended
*/
bool accelSurrogate;
+ /** marks ports receiving PROXY protocol traffic
+ *
+ * Indicating the following are required:
+ * - PROXY protocol magic header
+ * - src/dst IP retrieved from magic PROXY header
+ * - indirect client IP trust verification is mandatory
+ * - TLS is not supported
+ */
+ bool proxySurrogate;
+
/** marks NAT intercepted traffic
*
* Indicating the following are required:
* - NAT lookups
* - URL translation from relative to absolute form
* - Same-Origin verification is mandatory
* - destination pinning is recommended
* - authentication prohibited
*/
bool natIntercept;
/** marks TPROXY intercepted traffic
*
* Indicating the following are required:
* - src/dst IP inversion must be performed
* - client IP should be spoofed if possible
* - URL translation from relative to absolute form
* - Same-Origin verification is mandatory
* - destination pinning is recommended
* - authentication prohibited
=== modified file 'src/cache_cf.cc'
--- src/cache_cf.cc 2014-07-21 14:55:27 +0000
+++ src/cache_cf.cc 2014-07-25 12:05:51 +0000
@@ -3581,45 +3581,53 @@
    } else if (strcmp(token, "transparent") == 0 || strcmp(token, "intercept") == 0) {
        if (s->flags.accelSurrogate || s->flags.tproxyIntercept) {
            debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: Intercept mode requires its own interception port. It cannot be shared with other modes.");
            self_destruct();
        }
        s->flags.natIntercept = true;
        Ip::Interceptor.StartInterception();
        /* Log information regarding the port modes under interception. */
        debugs(3, DBG_IMPORTANT, "Starting Authentication on port " << s->s);
        debugs(3, DBG_IMPORTANT, "Disabling Authentication on port " << s->s << " (interception enabled)");
    } else if (strcmp(token, "tproxy") == 0) {
        if (s->flags.natIntercept || s->flags.accelSurrogate) {
            debugs(3,DBG_CRITICAL, "FATAL: http(s)_port: TPROXY option requires its own interception port. It cannot be shared with other modes.");
            self_destruct();
        }
        s->flags.tproxyIntercept = true;
        Ip::Interceptor.StartTransparency();
        /* Log information regarding the port modes under transparency. */
        debugs(3, DBG_IMPORTANT, "Disabling Authentication on port " << s->s << " (TPROXY enabled)");
+        if (s->flags.proxySurrogate) {
+            debugs(3, DBG_IMPORTANT, "Disabling TPROXY Spoofing on port " << s->s << " (proxy-surrogate enabled)");
+        }
+
        if (!Ip::Interceptor.ProbeForTproxy(s->s)) {
            debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: TPROXY support in the system does not work.");
            self_destruct();
        }
+    } else if (strcmp(token, "proxy-surrogate") == 0) {
+        s->flags.proxySurrogate = true;
+        debugs(3, DBG_IMPORTANT, "Disabling TPROXY Spoofing on port " << s->s << " (proxy-surrogate enabled)");
+
} else if (strncmp(token, "defaultsite=", 12) == 0) {
if (!s->flags.accelSurrogate) {
            debugs(3, DBG_CRITICAL, "FATAL: http(s)_port: defaultsite option requires Acceleration mode flag.");
self_destruct();
}
safe_free(s->defaultsite);
s->defaultsite = xstrdup(token + 12);
} else if (strcmp(token, "vhost") == 0) {
if (!s->flags.accelSurrogate) {
            debugs(3, DBG_CRITICAL, "WARNING: http(s)_port: vhost option is deprecated. Use 'accel' mode flag instead.");
}
s->flags.accelSurrogate = true;
s->vhost = true;
} else if (strcmp(token, "no-vhost") == 0) {
if (!s->flags.accelSurrogate) {
            debugs(3, DBG_IMPORTANT, "ERROR: http(s)_port: no-vhost option requires Acceleration mode flag.");
}
s->vhost = false;
} else if (strcmp(token, "vport") == 0) {
if (!s->flags.accelSurrogate) {
@@ -3783,84 +3791,91 @@
self_destruct();
return;
}
char *token = ConfigParser::NextToken();
if (!token) {
self_destruct();
return;
}
AnyP::PortCfgPointer s = new AnyP::PortCfg();
s->setTransport(protocol);
parsePortSpecification(s, token);
/* parse options ... */
while ((token = ConfigParser::NextToken())) {
parse_port_option(s, token);
}
-#if USE_OPENSSL
    if (s->transport.protocol == AnyP::PROTO_HTTPS) {
+#if USE_OPENSSL
        /* ssl-bump on https_port configuration requires either tproxy or intercept, and vice versa */
        const bool hijacked = s->flags.isIntercepted();
        if (s->flags.tunnelSslBumping && !hijacked) {
            debugs(3, DBG_CRITICAL, "FATAL: ssl-bump on https_port requires tproxy/intercept which is missing.");
            self_destruct();
        }
        if (hijacked && !s->flags.tunnelSslBumping) {
            debugs(3, DBG_CRITICAL, "FATAL: tproxy/intercept on https_port requires ssl-bump which is missing.");
            self_destruct();
        }
-    }
#endif
+        if (s->flags.proxySurrogate) {
+            debugs(3, DBG_CRITICAL, "FATAL: https_port: proxy-surrogate option cannot be used on HTTPS ports.");
+            self_destruct();
+        }
+    }
if (Ip::EnableIpv6&IPV6_SPECIAL_SPLITSTACK && s->s.isAnyAddr()) {
// clone the port options from *s to *(s->next)
s->next = s->clone();
s->next->s.setIPv4();
        debugs(3, 3, AnyP::UriScheme(s->transport.protocol).c_str() << "_port: clone wildcard address for split-stack: " << s->s << " and " << s->next->s);
}
while (*head != NULL)
head = &((*head)->next);
*head = s;
}
static void
dump_generic_port(StoreEntry * e, const char *n, const AnyP::PortCfgPointer &s)
{
char buf[MAX_IPSTRLEN];
storeAppendPrintf(e, "%s %s",
n,
s->s.toUrl(buf,MAX_IPSTRLEN));
// MODES and specific sub-options.
if (s->flags.natIntercept)
storeAppendPrintf(e, " intercept");
else if (s->flags.tproxyIntercept)
storeAppendPrintf(e, " tproxy");
+ else if (s->flags.proxySurrogate)
+ storeAppendPrintf(e, " proxy-surrogate");
+
else if (s->flags.accelSurrogate) {
storeAppendPrintf(e, " accel");
if (s->vhost)
storeAppendPrintf(e, " vhost");
if (s->vport < 0)
storeAppendPrintf(e, " vport");
else if (s->vport > 0)
storeAppendPrintf(e, " vport=%d", s->vport);
if (s->defaultsite)
storeAppendPrintf(e, " defaultsite=%s", s->defaultsite);
// TODO: compare against prefix of 'n' instead of assuming http_port
if (s->transport.protocol != AnyP::PROTO_HTTP)
            storeAppendPrintf(e, " protocol=%s", AnyP::UriScheme(s->transport.protocol).c_str());
if (s->allow_direct)
storeAppendPrintf(e, " allow-direct");
=== modified file 'src/cf.data.pre'
--- src/cf.data.pre 2014-07-21 14:55:27 +0000
+++ src/cf.data.pre 2014-07-25 12:05:51 +0000
@@ -1077,49 +1077,57 @@
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
NOCOMMENT_END
DOC_END
-NAME: follow_x_forwarded_for
+NAME: proxy_forwarded_access follow_x_forwarded_for
TYPE: acl_access
-IFDEF: FOLLOW_X_FORWARDED_FOR
LOC: Config.accessList.followXFF
DEFAULT_IF_NONE: deny all
-DEFAULT_DOC: X-Forwarded-For header will be ignored.
+DEFAULT_DOC: indirect client IP will not be accepted.
DOC_START
- Allowing or Denying the X-Forwarded-For header to be followed to
- find the original source of a request.
+	Determine which client proxies can be trusted to provide correct
+	information regarding the real client IP address.
+
+	The original source details can be relayed in:
+	    the HTTP Forwarded header, or
+	    the HTTP X-Forwarded-For header, or
+	    the PROXY protocol connection header.
+
+	This directive controls whether the X-Forwarded-For or Forwarded
+	headers may be followed to find the original source of a request,
+	and whether a client proxy may connect using the PROXY protocol.
Requests may pass through a chain of several other proxies
before reaching us. The X-Forwarded-For header will contain a
comma-separated list of the IP addresses in the chain, with the
rightmost address being the most recent.
If a request reaches us from a source that is allowed by this
configuration item, then we consult the X-Forwarded-For header
to see where that host received the request from. If the
X-Forwarded-For header contains multiple addresses, we continue
backtracking until we reach an address for which we are not allowed
to follow the X-Forwarded-For header, or until we reach the first
address in the list. For the purpose of ACL used in the
follow_x_forwarded_for directive the src ACL type always matches
the address we are testing and srcdomain matches its rDNS.
The end result of this process is an IP address that we will
refer to as the indirect client address. This address may
be treated as the client address for access control, ICAP, delay
pools and logging, depending on the acl_uses_indirect_client,
@@ -1704,40 +1712,45 @@
always disable always PMTU discovery.
In many setups of transparently intercepting proxies
Path-MTU discovery can not work on traffic towards the
clients. This is the case when the intercepting device
does not fully track connections and fails to forward
ICMP must fragment messages to the cache server. If you
have such setup and experience that certain clients
sporadically hang or never complete requests set
disable-pmtu-discovery option to 'transparent'.
	name=	Specifies an internal name for the port. Defaults to
		the port specification (port or addr:port)
tcpkeepalive[=idle,interval,timeout]
Enable TCP keepalive probes of idle connections.
In seconds; idle is the initial time before TCP starts
probing the connection, interval how often to probe, and
timeout the time before giving up.
+	proxy-surrogate
+		Require PROXY protocol version 1 or 2 connections.
+		The proxy_forwarded_access directive must be used
+		to whitelist the downstream proxies which can be trusted.
+
If you run Squid on a dual-homed machine with an internal
and an external interface we recommend you to specify the
internal address:port in http_port. This way Squid will only be
visible on the internal address.
NOCOMMENT_START
# Squid normally listens to port 3128
http_port @DEFAULT_HTTP_PORT@
NOCOMMENT_END
DOC_END
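To make the new knobs concrete, here is a hypothetical squid.conf fragment combining the proxy-surrogate port flag with the renamed access directive. The frontend address 192.0.2.10 and port 3129 are illustrative values, not taken from the patch:

```
# dedicated port receiving PROXY protocol v1/v2 from a trusted frontend
http_port 3129 proxy-surrogate

# trust only that frontend to supply indirect client details
acl frontend src 192.0.2.10
proxy_forwarded_access allow frontend
proxy_forwarded_access deny all
```

A PROXY-speaking load balancer such as haproxy would then connect to port 3129 and prepend the PROXY header to each new connection.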
NAME: https_port
IFDEF: USE_OPENSSL
TYPE: PortCfg
DEFAULT: none
LOC: HttpsPortList
DOC_START
Usage: [ip:]port cert=certificate.pem [key=key.pem] [mode] [options...]
=== modified file 'src/client_side.cc'
--- src/client_side.cc 2014-07-16 12:10:11 +0000
+++ src/client_side.cc 2014-07-25 12:05:51 +0000
@@ -102,40 +102,41 @@
#include "fd.h"
#include "fde.h"
#include "fqdncache.h"
#include "FwdState.h"
#include "globals.h"
#include "http.h"
#include "HttpHdrContRange.h"
#include "HttpHeaderTools.h"
#include "HttpReply.h"
#include "HttpRequest.h"
#include "ident/Config.h"
#include "ident/Ident.h"
#include "internal.h"
#include "ipc/FdNotes.h"
#include "ipc/StartListening.h"
#include "log/access_log.h"
#include "Mem.h"
#include "MemBuf.h"
#include "MemObject.h"
#include "mime_header.h"
+#include "parser/Tokenizer.h"
#include "profiler/Profiler.h"
#include "rfc1738.h"
#include "SquidConfig.h"
#include "SquidTime.h"
#include "StatCounters.h"
#include "StatHist.h"
#include "Store.h"
#include "TimeOrTag.h"
#include "tools.h"
#include "URL.h"
#if USE_AUTH
#include "auth/UserRequest.h"
#endif
#if USE_DELAY_POOLS
#include "ClientInfo.h"
#endif
#if USE_OPENSSL
#include "ssl/context_storage.h"
#include "ssl/gadgets.h"
@@ -2322,40 +2323,42 @@
#if THIS_VIOLATES_HTTP_SPECS_ON_URL_TRANSFORMATION
if ((t = strchr(url, '#'))) /* remove HTML anchors */
*t = '\0';
#endif
    debugs(33,5, HERE << "prepare absolute URL from " << (csd->transparent()?"intercept":(csd->port->flags.accelSurrogate ? "accel":"")));
/* Rewrite the URL in transparent or accelerator mode */
/* NP: there are several cases to traverse here:
* - standard mode (forward proxy)
* - transparent mode (TPROXY)
* - transparent mode with failures
* - intercept mode (NAT)
* - intercept mode with failures
* - accelerator mode (reverse proxy)
* - internal URL
* - mixed combos of the above with internal URL
+ * - remote interception with PROXY protocol
+ * - remote reverse-proxy with PROXY protocol
*/
if (csd->transparent()) {
/* intercept or transparent mode, properly working with no failures */
prepareTransparentURL(csd, http, url, req_hdr);
} else if (internalCheck(url)) {
/* internal URL mode */
/* prepend our name & port */
http->uri = xstrdup(internalLocalUri(NULL, url));
// We just re-wrote the URL. Must replace the Host: header.
// But have not parsed there yet!! flag for local-only handling.
http->flags.internal = true;
} else if (csd->port->flags.accelSurrogate || csd->switchedToHttps()) {
/* accelerator mode */
prepareAcceleratedURL(csd, http, url, req_hdr);
}
if (!http->uri) {
/* No special rewrites have been applied above, use the
@@ -2885,67 +2888,312 @@
bool
ConnStateData::concurrentRequestQueueFilled() const
{
const int existingRequestCount = getConcurrentRequestCount();
// default to the configured pipeline size.
    // add 1 because the head of pipeline is counted in concurrent requests and not prefetch queue
    const int concurrentRequestLimit = Config.pipeline_max_prefetch + 1;
    // when the queue is already filled we can't add more.
    if (existingRequestCount >= concurrentRequestLimit) {
        debugs(33, 3, clientConnection << " max concurrent requests reached (" << concurrentRequestLimit << ")");
        debugs(33, 5, clientConnection << " deferring new request until one is done");
return true;
}
return false;
}
/**
+ * Perform proxy_forwarded_access ACL tests on the client which
+ * connected to a PROXY protocol port, to see if we trust the
+ * sender enough to accept their PROXY header claim.
+ */
+bool
+ConnStateData::proxyProtocolValidateClient()
+{
+    ACLFilledChecklist ch(Config.accessList.followXFF, NULL, clientConnection->rfc931);
+ ch.src_addr = clientConnection->remote;
+ ch.my_addr = clientConnection->local;
+ ch.conn(this);
+
+ if (ch.fastCheck() != ACCESS_ALLOWED)
+ return proxyProtocolError("PROXY client not permitted by ACLs");
+
+ return true;
+}
+
+/**
+ * Perform cleanup on PROXY protocol errors.
+ * If header parsing hits a fatal error, terminate the connection;
+ * otherwise wait for more data.
+ */
+bool
+ConnStateData::proxyProtocolError(const char *msg)
+{
+ if (msg) {
+ debugs(33, 2, msg << " from " << clientConnection);
+ mustStop(msg);
+ }
+ return false;
+}
+
+/// magic octet prefix for PROXY protocol version 1
+static const SBuf Proxy10magic("PROXY ", 6);
+
+/// magic octet prefix for PROXY protocol version 2
+static const SBuf Proxy20magic("\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A", 12);
+
+/**
+ * Test the connection read buffer for PROXY protocol header.
+ * Version 1 and 2 headers are currently supported.
+ */
+bool
+ConnStateData::parseProxyProtocolHeader()
+{
+ // http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
+
+ // detect and parse PROXY protocol version 1 header
+    if (in.buf.length() > Proxy10magic.length() && in.buf.startsWith(Proxy10magic)) {
+ return parseProxy10();
+
+ // detect and parse PROXY protocol version 2 header
+    } else if (in.buf.length() > Proxy20magic.length() && in.buf.startsWith(Proxy20magic)) {
+ return parseProxy20();
+
+ // detect and terminate other protocols
+ } else if (in.buf.length() >= Proxy20magic.length()) {
+ // input other than the PROXY header is a protocol error
+ return proxyProtocolError("PROXY protocol error: invalid header");
+ }
+
+ // TODO: detect short non-magic prefixes earlier to avoid
+ // waiting for more data which may never come
+
+ // not enough bytes to parse yet.
+ return false;
+}
+
+/// parse the PROXY/1.0 protocol header from the connection read buffer
+bool
+ConnStateData::parseProxy10()
+{
+ ::Parser::Tokenizer tok(in.buf);
+ tok.skip(Proxy10magic);
+
+ SBuf tcpVersion;
+ if (!tok.prefix(tcpVersion, CharacterSet::ALPHA+CharacterSet::DIGIT))
+        return proxyProtocolError(tok.atEnd()?"PROXY/1.0 error: invalid protocol family":NULL);
+
+ if (!tcpVersion.cmp("UNKNOWN")) {
+ // skip to first LF (assumes it is part of CRLF)
+ const SBuf::size_type pos = in.buf.findFirstOf(CharacterSet::LF);
+ if (pos != SBuf::npos) {
+ if (in.buf[pos-1] != '\r')
+ return proxyProtocolError("PROXY/1.0 error: missing CR");
+ // found valid but unusable header
+ in.buf.consume(pos);
+ needProxyProtocolHeader_ = false;
+ return true;
+ }
+ // else, no LF found
+
+ // protocol error only if there are more than 107 bytes prefix header
+        return proxyProtocolError(in.buf.length() > 107? "PROXY/1.0 error: missing CRLF":NULL);
+
+ } else if (!tcpVersion.cmp("TCP",3)) {
+
+ // skip SP after protocol version
+ if (!tok.skip(' '))
+            return proxyProtocolError(tok.atEnd()?"PROXY/1.0 error: missing SP":NULL);
+
+ SBuf ipa, ipb;
+ int64_t porta, portb;
+        const CharacterSet ipChars = CharacterSet("IP Address",".:") + CharacterSet::HEXDIG;
+
+ // parse src-IP SP dst-IP SP src-port SP dst-port CRLF
+ if (!tok.prefix(ipa, ipChars) || !tok.skip(' ') ||
+ !tok.prefix(ipb, ipChars) || !tok.skip(' ') ||
+ !tok.int64(porta) || !tok.skip(' ') ||
+ !tok.int64(portb) || !tok.skip('\r') || !tok.skip('\n'))
+            return proxyProtocolError(!tok.atEnd()?"PROXY/1.0 error: invalid syntax":NULL);
+
+ in.buf = tok.remaining(); // sync buffers
+ needProxyProtocolHeader_ = false; // found successfully
+
+ // parse IP and port strings
+ Ip::Address originalClient, originalDest;
+
+ if (!originalClient.GetHostByName(ipa.c_str()))
+            return proxyProtocolError("PROXY/1.0 error: invalid src-IP address");
+
+ if (!originalDest.GetHostByName(ipb.c_str()))
+            return proxyProtocolError("PROXY/1.0 error: invalid dst-IP address");
+
+ if (porta > 0 && porta <= 0xFFFF) // max uint16_t
+ originalClient.port(static_cast<uint16_t>(porta));
+ else
+ return proxyProtocolError("PROXY/1.0 error: invalid src port");
+
+ if (portb > 0 && portb <= 0xFFFF) // max uint16_t
+ originalDest.port(static_cast<uint16_t>(portb));
+ else
+ return proxyProtocolError("PROXY/1.0 error: invalid dst port");
+
+ // we have original client and destination details now
+ // replace the client connection values
+ debugs(33, 5, "PROXY/1.0 protocol on connection " << clientConnection);
+ clientConnection->local = originalDest;
+ clientConnection->remote = originalClient;
+        clientConnection->flags &= ~COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP.
+ debugs(33, 5, "PROXY/1.0 upgrade: " << clientConnection);
+
+ // repeat fetch ensuring the new client FQDN can be logged
+ if (Config.onoff.log_fqdn)
+        fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS);
+
+ return true;
+ }
+
+ return false;
+}
+
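For reviewers who want to poke at the wire format, below is a minimal standalone sketch (plain C++ and iostreams, deliberately not Squid's Tokenizer/SBuf API) of the same PROXY v1 line parsing as parseProxy10() above. The UNKNOWN family and the 107-byte length cap are intentionally omitted; all names here are illustrative only:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>

// Result of parsing one PROXY protocol v1 header line of the form
// "PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\r\n".
struct ProxyV1 {
    std::string family, srcIp, dstIp;
    uint16_t srcPort, dstPort;
    bool ok;
};

inline ProxyV1 parseProxyV1(const std::string &line)
{
    ProxyV1 r = {"", "", "", 0, 0, false};
    // header must begin with the v1 magic and end with CRLF
    if (line.compare(0, 6, "PROXY ") != 0)
        return r;
    if (line.size() < 8 || line.substr(line.size() - 2) != "\r\n")
        return r;
    // whitespace-split the fields between the magic and the CRLF
    std::istringstream in(line.substr(6, line.size() - 8));
    long a = -1, b = -1;
    if (!(in >> r.family >> r.srcIp >> r.dstIp >> a >> b))
        return r;
    if (a < 1 || a > 0xFFFF || b < 1 || b > 0xFFFF)
        return r; // ports must fit in uint16_t
    r.srcPort = static_cast<uint16_t>(a);
    r.dstPort = static_cast<uint16_t>(b);
    r.ok = true;
    return r;
}
```

This mirrors the structure of the patch's tokenizer loop (family, two addresses, two ports, CRLF) without the incremental-buffer handling that the real code needs.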
+/// parse the PROXY/2.0 protocol header from the connection read buffer
+bool
+ConnStateData::parseProxy20()
+{
+    if (in.buf.length() < Proxy20magic.length() + 4)
+        return false; // need the complete 16-byte fixed header first
+
+    const char verCmd = in.buf[Proxy20magic.length()];
+    if ((verCmd & 0xF0) != 0x20) // version == 2 is mandatory
+        return proxyProtocolError("PROXY/2.0 error: invalid version");
+
+    const char command = (verCmd & 0x0F);
+    if ((command & 0xFE) != 0x00) // values other than 0x0-0x1 are invalid
+        return proxyProtocolError("PROXY/2.0 error: invalid command");
+
+    const char famProto = in.buf[Proxy20magic.length() + 1];
+    const char family = (famProto & 0xF0) >> 4;
+    if (family > 0x3) // values other than 0x0-0x3 are invalid
+        return proxyProtocolError("PROXY/2.0 error: invalid family");
+
+    const char proto = (famProto & 0x0F);
+    if (proto > 0x2) // values other than 0x0-0x2 are invalid
+        return proxyProtocolError("PROXY/2.0 error: invalid protocol type");
+
+    const char *clen = in.buf.rawContent() + Proxy20magic.length() + 2;
+    const uint16_t len = ntohs(*(reinterpret_cast<const uint16_t *>(clen)));
+
+    if (in.buf.length() < Proxy20magic.length() + 4 + len)
+        return false; // need more bytes
+
+ in.buf.consume(Proxy20magic.length() + 4); // 4 being the extra bytes
+ const SBuf extra = in.buf.consume(len);
+ needProxyProtocolHeader_ = false; // found successfully
+
+ // LOCAL connections do nothing with the extras
+ if (command == 0x00/* LOCAL*/)
+ return true;
+
+ typedef union proxy_addr {
+ struct { /* for TCP/UDP over IPv4, len = 12 */
+ struct in_addr src_addr;
+ struct in_addr dst_addr;
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ipv4_addr;
+ struct { /* for TCP/UDP over IPv6, len = 36 */
+ struct in6_addr src_addr;
+ struct in6_addr dst_addr;
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ipv6_addr;
+#if NOT_SUPPORTED
+ struct { /* for AF_UNIX sockets, len = 216 */
+ uint8_t src_addr[108];
+ uint8_t dst_addr[108];
+ } unix_addr;
+#endif
+ } pax;
+
+ const pax *ipu = reinterpret_cast<const pax*>(extra.rawContent());
+
+ // replace the client connection values
+ debugs(33, 5, "PROXY/2.0 protocol on connection " << clientConnection);
+ switch (family)
+ {
+ case 0x1: // IPv4
+ clientConnection->local = ipu->ipv4_addr.dst_addr;
+ clientConnection->local.port(ntohs(ipu->ipv4_addr.dst_port));
+ clientConnection->remote = ipu->ipv4_addr.src_addr;
+ clientConnection->remote.port(ntohs(ipu->ipv4_addr.src_port));
+        clientConnection->flags &= ~COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP.
+ break;
+ case 0x2: // IPv6
+ clientConnection->local = ipu->ipv6_addr.dst_addr;
+ clientConnection->local.port(ntohs(ipu->ipv6_addr.dst_port));
+ clientConnection->remote = ipu->ipv6_addr.src_addr;
+ clientConnection->remote.port(ntohs(ipu->ipv6_addr.src_port));
+        clientConnection->flags &= ~COMM_TRANSPARENT; // prevent TPROXY spoofing of this new IP.
+ break;
+ default: // do nothing
+ break;
+ }
+ debugs(33, 5, "PROXY/2.0 upgrade: " << clientConnection);
+
+ // repeat fetch ensuring the new client FQDN can be logged
+ if (Config.onoff.log_fqdn)
+ fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS);
+
+ return true;
+}
+
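Similarly, a standalone sketch of the v2 fixed-header decoding used above. Byte offsets assume the 12-byte magic is still at the front of the buffer, as when parseProxy20() is entered: byte 12 packs version/command, byte 13 packs family/protocol, and bytes 14-15 hold the big-endian length of the address block that follows. Names are illustrative, not Squid's:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Decoded fields of the 16-byte PROXY protocol v2 fixed header.
struct ProxyV2Header {
    uint8_t version, command, family, proto;
    uint16_t addrLen;
    bool ok;
};

static const uint8_t kV2Magic[12] =
    {0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A};

inline ProxyV2Header parseProxyV2Header(const std::vector<uint8_t> &buf)
{
    ProxyV2Header h = {0, 0, 0, 0, 0, false};
    if (buf.size() < 16 || std::memcmp(buf.data(), kV2Magic, 12) != 0)
        return h; // need the whole fixed header, starting with the magic
    h.version = buf[12] >> 4;   // must be 2
    h.command = buf[12] & 0x0F; // 0x0 = LOCAL, 0x1 = PROXY
    h.family  = buf[13] >> 4;   // 0x0 UNSPEC, 0x1 INET, 0x2 INET6, 0x3 UNIX
    h.proto   = buf[13] & 0x0F; // 0x0 UNSPEC, 0x1 STREAM, 0x2 DGRAM
    h.addrLen = static_cast<uint16_t>((buf[14] << 8) | buf[15]); // big-endian
    h.ok = (h.version == 2 && h.command <= 1 && h.family <= 3 && h.proto <= 2);
    return h;
}
```

Reading the length out of bytes 14-15 before touching the address block is what lets the real parser return "need more bytes" instead of over-reading a short buffer.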
+/**
* Attempt to parse one or more requests from the input buffer.
* If a request is successfully parsed, even if the next request
* is only partially parsed, it will return TRUE.
*/
bool
ConnStateData::clientParseRequests()
{
HttpRequestMethod method;
bool parsed_req = false;
debugs(33, 5, HERE << clientConnection << ": attempting to parse");
// Loop while we have read bytes that are not needed for producing the body
// On errors, bodyPipe may become nil, but readMore will be cleared
while (!in.buf.isEmpty() && !bodyPipe && flags.readMore) {
connStripBufferWhitespace(this);
/* Don't try to parse if the buffer is empty */
if (in.buf.isEmpty())
break;
/* Limit the number of concurrent requests */
if (concurrentRequestQueueFilled())
break;
/* Begin the parsing */
PROF_start(parseHttpRequest);
+
+ // try to parse the PROXY protocol header magic bytes
+ if (needProxyProtocolHeader_ && !parseProxyProtocolHeader())
+ break;
+
HttpParserInit(&parser_, in.buf.c_str(), in.buf.length());
/* Process request */
Http::ProtocolVersion http_ver;
        ClientSocketContext *context = parseHttpRequest(this, &parser_, &method, &http_ver);
PROF_stop(parseHttpRequest);
/* partial or incomplete request */
if (!context) {
// TODO: why parseHttpRequest can just return parseHttpRequestAbort
// (which becomes context) but checkHeaderLimits cannot?
checkHeaderLimits();
break;
}
/* status -1 or 1 */
if (context) {
debugs(33, 5, HERE << clientConnection << ": parsed a request");
            AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "clientLifetimeTimeout", CommTimeoutCbPtrFun(clientLifetimeTimeout, context->http));
@@ -3263,114 +3511,130 @@
sslBumpMode(Ssl::bumpEnd),
switchedToHttps_(false),
sslServerBump(NULL),
#endif
stoppedSending_(NULL),
stoppedReceiving_(NULL)
{
pinning.host = NULL;
pinning.port = -1;
pinning.pinned = false;
pinning.auth = false;
pinning.zeroReply = false;
pinning.peer = NULL;
    // store the details required for creating more MasterXaction objects as new requests come in
clientConnection = xact->tcpClient;
port = xact->squidPort;
log_addr = xact->tcpClient->remote;
log_addr.applyMask(Config.Addrs.client_netmask);
- // ensure a buffer is present for this connection
- in.maybeMakeSpaceAvailable();
-
    if (port->disable_pmtu_discovery != DISABLE_PMTU_OFF && (transparent() || port->disable_pmtu_discovery == DISABLE_PMTU_ALWAYS)) {
#if defined(IP_MTU_DISCOVER) && defined(IP_PMTUDISC_DONT)
int i = IP_PMTUDISC_DONT;
        if (setsockopt(clientConnection->fd, SOL_IP, IP_MTU_DISCOVER, &i, sizeof(i)) < 0)
            debugs(33, 2, "WARNING: Path MTU discovery disabling failed on " << clientConnection << " : " << xstrerror());
#else
static bool reported = false;
if (!reported) {
            debugs(33, DBG_IMPORTANT, "NOTICE: Path MTU discovery disabling is not supported on your platform.");
reported = true;
}
#endif
}
+}
+
+void
+ConnStateData::start()
+{
+ // ensure a buffer is present for this connection
+ in.maybeMakeSpaceAvailable();
typedef CommCbMemFunT<ConnStateData, CommCloseCbParams> Dialer;
    AsyncCall::Pointer call = JobCallback(33, 5, Dialer, this, ConnStateData::connStateClosed);
comm_add_close_handler(clientConnection->fd, call);
if (Config.onoff.log_fqdn)
fqdncache_gethostbyaddr(clientConnection->remote, FQDN_LOOKUP_IF_MISS);
#if USE_IDENT
if (Ident::TheConfig.identLookup) {
        ACLFilledChecklist identChecklist(Ident::TheConfig.identLookup, NULL, NULL);
- identChecklist.src_addr = xact->tcpClient->remote;
- identChecklist.my_addr = xact->tcpClient->local;
+ identChecklist.src_addr = clientConnection->remote;
+ identChecklist.my_addr = clientConnection->local;
if (identChecklist.fastCheck() == ACCESS_ALLOWED)
- Ident::Start(xact->tcpClient, clientIdentDone, this);
+ Ident::Start(clientConnection, clientIdentDone, this);
}
#endif
clientdbEstablished(clientConnection->remote, 1);
+ needProxyProtocolHeader_ = port->flags.proxySurrogate;
+ if (needProxyProtocolHeader_) {
+        if (!proxyProtocolValidateClient()) // will close the connection on failure
+ return;
+ }
+
+ // prepare any child API state that is needed
+ BodyProducer::start();
+ HttpControlMsgSink::start();
+
+ // if all is well, start reading
flags.readMore = true;
+ readSomeData();
}
/** Handle a new connection on HTTP socket. */
void
httpAccept(const CommAcceptCbParams ¶ms)
{
MasterXaction::Pointer xact = params.xaction;
AnyP::PortCfgPointer s = xact->squidPort;
    // NP: it is possible the port was reconfigured when the call or accept() was queued.
if (params.flag != Comm::OK) {
        // It's possible the call was still queued when the client disconnected
        debugs(33, 2, "httpAccept: " << s->listenConn << ": accept failure: " << xstrerr(params.xerrno));
return;
}
debugs(33, 4, HERE << params.conn << ": accepted");
fd_note(params.conn->fd, "client http connect");
if (s->tcp_keepalive.enabled) {
        commSetTcpKeepalive(params.conn->fd, s->tcp_keepalive.idle, s->tcp_keepalive.interval, s->tcp_keepalive.timeout);
}
++ incoming_sockets_accepted;
// Socket is ready, setup the connection manager to start using it
ConnStateData *connState = new ConnStateData(xact);
typedef CommCbMemFunT<ConnStateData, CommTimeoutCbParams> TimeoutDialer;
    AsyncCall::Pointer timeoutCall = JobCallback(33, 5, TimeoutDialer, connState, ConnStateData::requestTimeout);
commSetConnTimeout(params.conn, Config.Timeout.request, timeoutCall);
- connState->readSomeData();
+ AsyncJob::Start(connState);
#if USE_DELAY_POOLS
fd_table[params.conn->fd].clientInfo = NULL;
if (Config.onoff.client_db) {
        /* it was said several times that client write limiter does not work if client_db is disabled */
ClientDelayPools& pools(Config.ClientDelay.pools);
ACLFilledChecklist ch(NULL, NULL, NULL);
        // TODO: we check early to limit error response bandwidth but we
// should recheck when we can honor delay_pool_uses_indirect
// TODO: we should also pass the port details for myportname here.
ch.src_addr = params.conn->remote;
ch.my_addr = params.conn->local;
for (unsigned int pool = 0; pool < pools.size(); ++pool) {
/* pools require explicit 'allow' to assign a client into them */
if (pools[pool].access) {
@@ -3524,41 +3788,41 @@
    debugs(83, 3, "clientNegotiateSSL: FD " << fd << " negotiated cipher " << SSL_get_cipher(ssl));
client_cert = SSL_get_peer_certificate(ssl);
if (client_cert != NULL) {
        debugs(83, 3, "clientNegotiateSSL: FD " << fd << " client certificate: subject: " << X509_NAME_oneline(X509_get_subject_name(client_cert), 0, 0));
        debugs(83, 3, "clientNegotiateSSL: FD " << fd << " client certificate: issuer: " << X509_NAME_oneline(X509_get_issuer_name(client_cert), 0, 0));
X509_free(client_cert);
} else {
        debugs(83, 5, "clientNegotiateSSL: FD " << fd << " has no certificate.");
}
- conn->readSomeData();
+ AsyncJob::Start(conn);
}
/**
* If SSL_CTX is given, starts reading the SSL handshake.
* Otherwise, calls switchToHttps to generate a dynamic SSL_CTX.
*/
static void
httpsEstablish(ConnStateData *connState, SSL_CTX *sslContext, Ssl::BumpMode bumpMode)
{
SSL *ssl = NULL;
assert(connState);
const Comm::ConnectionPointer &details = connState->clientConnection;
if (sslContext && !(ssl = httpsCreate(details, sslContext)))
return;
typedef CommCbMemFunT<ConnStateData, CommTimeoutCbParams> TimeoutDialer;
    AsyncCall::Pointer timeoutCall = JobCallback(33, 5, TimeoutDialer, connState, ConnStateData::requestTimeout);
commSetConnTimeout(details, Config.Timeout.request, timeoutCall);
=== modified file 'src/client_side.h'
--- src/client_side.h 2014-07-14 09:48:47 +0000
+++ src/client_side.h 2014-07-25 12:05:51 +0000
@@ -313,40 +313,41 @@
    \param request   if it is not NULL also checks if the pinning info refers to the request client side HttpRequest
    \param CachePeer if it is not NULL also check if the CachePeer is the pinning CachePeer
    \return          The details of the server side connection (may be closed if failures were present).
     */
    const Comm::ConnectionPointer validatePinnedConnection(HttpRequest *request, const CachePeer *peer);
/**
     * returns the pinned CachePeer if it exists, NULL otherwise
*/
CachePeer *pinnedPeer() const {return pinning.peer;}
bool pinnedAuth() const {return pinning.auth;}
    // pinning related comm callbacks
void clientPinnedConnectionClosed(const CommCloseCbParams &io);
// comm callbacks
void clientReadRequest(const CommIoCbParams &io);
void connStateClosed(const CommCloseCbParams &io);
void requestTimeout(const CommTimeoutCbParams ¶ms);
// AsyncJob API
+ virtual void start();
virtual bool doneAll() const { return BodyProducer::doneAll() && false;}
virtual void swanSong();
/// Changes state so that we close the connection and quit after serving
/// the client-side-detected error response instead of getting stuck.
void quitAfterError(HttpRequest *request); // meant to be private
/// The caller assumes responsibility for connection closure detection.
void stopPinnedConnectionMonitoring();
#if USE_OPENSSL
/// called by FwdState when it is done bumping the server
void httpsPeeked(Comm::ConnectionPointer serverConnection);
    /// Starts creating a dynamic SSL_CTX for the host, or uses the static port SSL context.
void getSslContextStart();
/**
* Done create dynamic ssl certificate.
*
     * \param[in] isNew if the generated certificate is new, so we need to add this certificate to storage.
@@ -382,40 +383,50 @@
#endif
/* clt_conn_tag=tag annotation access */
const SBuf &connectionTag() const { return connectionTag_; }
void connectionTag(const char *aTag) { connectionTag_ = aTag; }
protected:
void startDechunkingRequest();
void finishDechunkingRequest(bool withSuccess);
void abortChunkedRequestBody(const err_type error);
err_type handleChunkedRequestBody(size_t &putSize);
void startPinnedConnectionMonitoring();
void clientPinnedConnectionRead(const CommIoCbParams &io);
private:
int connFinishedWithConn(int size);
void clientAfterReadingRequests();
bool concurrentRequestQueueFilled() const;
+ /* PROXY protocol functionality */
+ bool proxyProtocolValidateClient();
+ bool parseProxyProtocolHeader();
+ bool parseProxy10();
+ bool parseProxy20();
+ bool proxyProtocolError(const char *reason = NULL);
+
+ /// whether PROXY protocol header is still expected
+ bool needProxyProtocolHeader_;
+
#if USE_AUTH
    /// some user details that can be used to perform authentication on this connection
Auth::UserRequest::Pointer auth_;
#endif
HttpParser parser_;
    // XXX: CBDATA plays with public/private and leaves the following 'private' fields all public... :(
#if USE_OPENSSL
bool switchedToHttps_;
    /// The SSL server host name appears in CONNECT request or the server ip address for the intercepted requests
    String sslConnectHostOrIp; ///< The SSL server host name as passed in the CONNECT request
String sslCommonName; ///< CN name for SSL certificate generation
    String sslBumpCertKey; ///< Key to use to store/retrieve generated certificate
/// HTTPS server cert. fetching state for bump-ssl-server-first
Ssl::ServerBump *sslServerBump;
Ssl::CertSignAlgorithm signAlgorithm; ///< The signing algorithm to use
#endif