Robert Collins wrote:
On Tue, 2009-07-07 at 17:01 -0600, Alex Rousskov wrote:
The reason I ask is that we're looking to take a patch that
implements the IETF "websockets" protocol:

    http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-17

I noticed that in section 3.1.3 the spec relies implicitly on CONNECT
being allowed to arbitrary ports.  But this is not the case for default
installs of squid, and thus I fear that the general approach may be flawed.

I think it has several serious challenges; I doubt it could be deployed
in (say) Australia *at all*.

I suppose we could ask that you allow arbitrary CONNECT access (or at
least to the "well-known" websockets ports: 80/81/815).  But I'm
guessing that wouldn't help much, as it would probably take many years
for that change to roll out across the net.
I would also expect many firewalls to block port 81 and 815 by default
so even if Squid allows those ports, websocket clients that do not hide
behind Squid will still have problems (unless websocket is restricted to
proxies).

I'm open to adding 815, maybe 81 - though as Robert says, the utility may not be as great as the authors think.

CONNECT to port 80 by default is IMO not an option; it pretty much defeats all the other HTTP-level security measures. Port 81 is borderline risky, considering the number of proxies and web apps which have historically used it for regular internal LAN HTTP traffic. Admins who know they need CONNECT to those ports are free to allow it, of course.

IIRC, there is also another HTTPS port 5-something, and some other protocols needing consideration in the same commit.
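For admins who do choose to open those ports, a minimal squid.conf sketch might look like this (the port list is taken from the numbers discussed above, not a recommendation):

```
# Sketch only: extend the set of ports CONNECT may reach.
# 443 is the stock default; 81 and 815 are the draft's proposed
# websocket ports and carry the risks noted above.
acl SSL_ports port 443
acl SSL_ports port 81 815
http_access deny CONNECT !SSL_ports
```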


So squid has two primary uses:
 - caching
 - security

From a caching perspective, websockets are just overhead - uncacheable
content simply adds to the work squid has to do. Just using TCP/IP
would be a lot better and faster.
Many ISPs use interception techniques to deploy squid and other caches.
These operate by performing a MITM attack on TCP/IP traffic. Policy
limits in these deployments are often turned off (because users using
TCP/IP would expect to access any url), while at the same time lifetime
heuristics tend to be turned way up (to maximise the hit rate the ISP
can achieve). As such, I'd expect this spec to have requests fail all
over the place. There are mechanisms for such requests to be reissued on
fresh, direct TCP connections by the router that performed the
interception - but how successful that is will vary with the specific
toolchain deployed. Note that this has nothing to do with the use of
CONNECT - squid, or other proxies, would see the 'Upgrade:' request.

From a security perspective, there are two sub-issues:
 - preventing malicious use (e.g. spam bouncing)
 - policy restrictions (ACLs, corporate policy, content filtering...)

Malicious use covers things like not being an open proxy, and not
permitting connections to SMTP servers on behalf of any client. This is
where the default CONNECT limit comes in - and also the prevention of
HTTP requests to port 25.
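Both protections appear in the shipped defaults roughly like this (a condensed sketch; the exact Safe_ports list varies between releases):

```
# Condensed sketch of the stock squid.conf protections:
# port 25 (SMTP) is absent from Safe_ports, so plain HTTP
# requests to it are refused, and CONNECT is limited to HTTPS.
acl Safe_ports port 80 21 443 1025-65535
http_access deny !Safe_ports
acl SSL_ports port 443
http_access deny CONNECT !SSL_ports
```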

Any thoughts you have on the matter would be much appreciated.   In
general, it seems that handling proxies correctly is turning out to be
one of the trickier parts of implementing a websockets-like API, so we
may want to pick your brains some more in the future with other ideas.
What is the motivation behind adding more ports? Can websocket use port
80 (without CONNECT) and port 443 (with CONNECT)? I have noticed there
is something about the Upgrade header in the websocket draft so perhaps
that mechanism can be polished to implement a proper protocol switch? Or
is that too late?
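For reference, the HTTP/1.1 Upgrade mechanism being alluded to looks roughly like this on the wire (a sketch with hypothetical host and path, not text copied from the draft):

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: WebSocket
Connection: Upgrade

HTTP/1.1 101 Switching Protocols
Upgrade: WebSocket
Connection: Upgrade
```

A proxy that understands Upgrade could switch the connection to a tunnel at that point, much as it does after a successful CONNECT.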

I'd like to understand the motivation behind not just using TCP/IP with
a SOCKS proxy. The whole websockets thing seems like baroque cruft, TBH.
A lot of complexity, more layers that things can go wrong in (and that
security holes can be found in).

2) A totally unrelated issue:  I assume by now you're aware that Firefox
(and all other major browsers besides Opera) no longer renders
replies from squid or other proxies for failed HTTPS requests.   For
details see

    https://bugzilla.mozilla.org/show_bug.cgi?id=479880

I meant to email you about this before the patch went in, but then I got
busy :)   I'm not sure that there's much to talk about now that the fix
is in (it's gross, and we theoretically ought to be able to do better,
but it would be a lot of work, so I don't think it's going to happen),
but I wanted you to know. It's really just a UI issue (you can tell I'm
a systems programmer...)

Thanks. It would be nice if mozilla would create a context *for the
proxy* and use that to show the non-200 response. Debuggability is
kind of important :).

These are the response codes I'm aware of people trying to use with Squid CONNECT. For now, admins are hacking around the browsers that don't support custom errors.

Taking the 404 access-denied response as a given.

* the authenticated proxy (407) response needing a login popup.

* the HTTPS->HTTP downgrade response (301). This seems to be popular for some reason where corporate policy prohibits end-to-end HTTPS with external sources. Our nasty SslBump may cover this case though. I'm open to leaving it a problem :)

* the captive-portal access response (303, 305; 'old'-style setups: 302). These are basic requirements for captive portal design; without them, CONNECT requests may loop in silent failures. There are already actual instances of the scripting attack your bug mentions being used as the portal login mechanism, because the secure redirect with 302 does not always work. HTTP <meta /> redirects are another active-body case they don't mention in the report.

* as HTTP/1.1 is implemented, it's likely that Squid will generate (402, 403, 405, 408, 413, 414, 505) messages in response to CONNECT. Some are already generated now. Having them overridden with something sensible is no bother to Squid, but the proxy page may be useful to you.

There may be others I'm not aware of.
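To illustrate the active-body case mentioned above: a captive portal error page can redirect the browser with nothing more than a markup fragment like this (hypothetical URL):

```
<meta http-equiv="refresh" content="0; url=https://portal.example.invalid/login" />
```

which is exactly the sort of content the browsers have stopped rendering for failed CONNECT requests.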

As Robert points out, a proxy context would be needed to display these proxy-related errors safely - particularly if they contain any kind of active body.

I'm open to changing any of the error codes and messages Squid provides that do not yet match RFC 2616, if it would assist. Just point out any you find. A few of the 5xx are already known, and status changes are on the roadmap.


P.S. So every time that I set up squid on my machine to test something,
it always denies access to me out of the box.  I finally figured out
it's because you don't allow localhost connections by default.  Should
you be adding a line like

   acl localnet src localhost

to squid.conf?  Is there a reason why you're allowing 10.0.0.1, etc. to
connect, but not localhost?

I'd be open to us changing this. It is a [small] risk for a bastion host
to allow connections from itself, because a compromised account on that
host then gains access via the proxy. I have no evidence to make an
assertion about the frequency of deployments on a bastion host vs
behind one, so the only argument I have for preserving it is 'secure
as possible by default' - which, while a good argument, isn't the end of
the discussion.

-Rob

It was an oversight on my part when making the RFC 1918 additions. As far as I can see, the security issues balance out in different ways for both bastion hosts and internal workstations.

I'm planning to correct it shortly.
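A sketch of what the corrected default could look like - recent squid.conf templates already predefine a localhost ACL, so it should be a one-line change:

```
# Sketch: the stock config defines "acl localhost src 127.0.0.1/32";
# allowing it alongside localnet covers same-host clients.
http_access allow localhost
```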

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9
