Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-02 15:35, Grant Taylor wrote: On 11/1/22 6:27 PM, squid3 wrote: The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed. I suspect that Squid et al. could do similar. ;-) Yes, they can be configured to do so if you need it. Neither outcome avoids the problem that the client was trying to interact with an entirely different resource on another server, whose info has been implicitly lost by the protocol syntax. I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls, which all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits. I have worked in schools, and other public places, some of which had a captive portal that intercepted to a web server to process registration or flat blocked non-proxied traffic. The proxy server in those cases was explicit. They missed a trick then. If the registration process is simple, it can be done by Squid with a session helper and two listening ports. We even ship some ERR_AGENT_* templates for captive portal use. The current default doesn't work on servers using NLD Active API Server. Reference? Google is not providing me with anything HTTP-capable by that name or the obvious sub-sets. And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header. 
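For readers wondering what the "session helper and two listening ports" approach looks like, a rough squid.conf sketch follows. This is untested and illustrative only: the helper path, database location, timeout values, and the ERR_AGENT_PORTAL template name are assumptions that vary by build (Squid ships the ext_session_acl helper; check your install for the exact path and options).

```
# Hypothetical captive-portal sketch: port 3128 for explicit clients,
# port 3129 for intercepted ones. Paths and template name are assumptions.
http_port 3128
http_port 3129 intercept

# ext_session_acl tracks which clients currently have an active session.
external_acl_type session ttl=60 negative_ttl=0 %SRC \
    /usr/lib/squid/ext_session_acl -t 7200 -b /var/lib/squid/sessions.db
acl existing_session external session

# Clients without a session get the registration/splash template instead.
deny_info 511:ERR_AGENT_PORTAL existing_session
http_access deny !existing_session
```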
If I'm understanding correctly, this is a case of someone asking Bob to connect to Bob. That's not a thing. Just talk directly to Bob. http-alt://bob?http://alice/some/resource is instructing a client to ask proxy (Bob) to fetch /some/resource from origin (Alice). All the client "explicit configuration" is in the URL, rather than client config files or environment variables. The ones I am aware of are: * HTTP software testing and development * IoT sensor polling * printer network bootstrapping * manufacturing controller management * network stability monitoring systems Why is anything developed in the last two decades green-fielding with HTTP/0.9?!?!?! The IoT stuff at least. The others are getting old, but more like 10+ years rather than 20+. Cheers Amos ___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
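For what it's worth, the proposed URL form is mechanically parseable with standard tooling today. A minimal sketch (the http-alt:// scheme itself is the hypothetical part of this thread; the 8080 default is just IANA's http-alt registration, and `split_http_alt` is a name invented here):

```python
from urllib.parse import urlsplit

def split_http_alt(url):
    """Split a (hypothetical) http-alt:// URL into (proxy authority, target URL).

    The authority part names the forward-proxy to use; the query segment
    carries the real request-target, as described in the thread.
    """
    parts = urlsplit(url)
    if parts.scheme != "http-alt":
        raise ValueError("not an http-alt URL: " + url)
    proxy = parts.netloc
    if ":" not in proxy:
        proxy += ":8080"  # IANA-registered default port for http-alt
    return proxy, parts.query

print(split_http_alt("http-alt://bob?http://alice/some/resource"))
# → ('bob:8080', 'http://alice/some/resource')
```

A client resolving such a URL would open a TCP connection to the proxy authority and send the query part as an absolute-form request-target.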
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 11/1/22 6:27 PM, squ...@treenet.co.nz wrote: No, you cropped my use-case description. It specified a client which was *unaware* that it was talking to a forward-proxy. Sorry, that was unintentional. Such a client will send requests that only a reverse-proxy or origin server can handle properly - because they have explicit special configuration to do so. ACK In all proxying cases there is special configuration somewhere. For forward-proxy it is in the client (or its OS so-called "default"), for reverse-proxy it is in the proxy, for interception-proxy it is in both the network and the proxy. ACK The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed. I suspect that Squid et al. could do similar. ;-) Almost all the installs I have worked on had interception as part of their configuration. Fair enough. It is officially recommended to include interception as a backup to explicit forward-proxy for networks needing full traffic control and/or monitoring. I've taken things one step further. I forgo the interception and simply have the firewall / router hard block traffic not from the proxy server. }:-) But short of that, I see and acknowledge the value of interception. I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls, which all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits. I have worked in schools, and other public places, some of which had a captive portal that intercepted to a web server to process registration or flat blocked non-proxied traffic. The proxy server in those cases was explicit. There are also at least a half dozen nation states with national firewalls doing traffic monitoring and censorship. 
At least 3 of the ones I know of use Squid for the HTTP portion. I'm aware of a small number of such nation states. I assume that there are many more. I was not aware that Squid played in that arena. ACK. That is you. I am coming at this from the maintainer viewpoint where the entire community's needs have to be balanced. I maintain that the /default/ does not have to work for /all/ use cases. I agree that the /default/ should work for /most/ use cases. The current default doesn't work on servers using NLD Active API Server. Ergo the current default doesn't work on /all/ use cases. }:-) And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header. If I'm understanding correctly, this is a case of someone asking Bob to connect to Bob. That's not a thing. Just talk directly to Bob. The ones I am aware of are: * HTTP software testing and development * IoT sensor polling * printer network bootstrapping * manufacturing controller management * network stability monitoring systems Why is anything developed in the last two decades green-fielding with HTTP/0.9?!?!?! I doubt anyone can quantify it accurately. But worldwide use of HTTP/1.1 is also dropping, and at a faster rate than 0.9/1.0 right now as the more efficient HTTP/2+ expands. Sure. There should be three categories of migrations: HTTP/0.9 to something, HTTP/1.0 to something, HTTP/1.1 to HTTP/2. I sincerely hope that the somethings are going to HTTP/1.1 or HTTP/2. The HTTP/1.1 specification requires semantic compatibility. 
So long as 1.1 is still a thing the older versions are likely to remain as well. Undesirable as that may be. ACK -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-02 09:03, Grant Taylor wrote: On 11/1/22 1:24 PM, squid3 wrote: No I meant W3C. Back in the before times things were a bit messy. Hum. I have more questions than answers. I'm not aware of W3C ever assigning ports. I thought it was /always/ IANA. Indeed, thus we cannot register it with IETF/IANA now. The IANA http-alt port would probably be best if we did go official. ACK You see my point I hope. A gateway proxy that returns an error to *every* request is not very good. Except it's not "/every/ /request/". It's "/every/ /request/ /of/ /a/ /specific/ /type/" where type is an HTTP version. No, you cropped my use-case description. It specified a client which was *unaware* that it was talking to a forward-proxy. Such a client will send requests that only a reverse-proxy or origin server can handle properly - because they have explicit special configuration to do so. In all proxying cases there is special configuration somewhere. For forward-proxy it is in the client (or its OS so-called "default"), for reverse-proxy it is in the proxy, for interception-proxy it is in both the network and the proxy. What does CloudFlare or any of the other big proxy services or even other proxy applications do if you send them an HTTP/1.0 or even HTTP/0.9 request without the associated Host: header? The working ones deliver an HTTP/1.1 302 redirect to their company's homepage if the request came from outside the company LAN. If the request came from an administrator's machine it may respond with stats data about the node being probed. There is no "configured proxy" for this use-case. Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) has the same issue. How common is the (network) transparent / intercepting / implicit use of Squid (or any proxy for that matter)? All of the installs that I've worked on (both as a user and as an administrator) have been explicit / non-transparent. 
Almost all the installs I have worked on had interception as part of their configuration. It is officially recommended to include interception as a backup to explicit forward-proxy for networks needing full traffic control and/or monitoring. I take it from your statement you have not worked on networks like web-cafes, airports, schools, hospitals, public shopping malls, which all use captive portal systems, or high-security institutions capturing traffic for personnel activity audits. There are also at least a half dozen nation states with national firewalls doing traffic monitoring and censorship. At least 3 of the ones I know of use Squid for the HTTP portion. I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s). Sure. "by default" has to work for *all* use-cases. I disagree. ACK. That is you. I am coming at this from the maintainer viewpoint where the entire community's needs have to be balanced. Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser. You were already specifying the non-default-http port via the "http-alt://" scheme in your example. And you were specifying the non-default-'http-alt' port via the "http://" scheme in yours. Either way these are two different HTTP syntaxes with different "default port" values. An agent supporting the http:// URL treats it as a request for some resource at the HTTP origin server indicated by the URL authority part or Host header. An agent supporting the http-alt:// URL treats it as a request to forward-proxy the request-target specified in the URL query segment, using the upstream proxy indicated by the URL authority part or Host header. Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations. I question the veracity of that. 
Mostly around said client's use of an explicit proxy. It is a clear side-effect of the fact that tunnels cannot be opened by requesting an origin-form URL (eg "/index.html"). They require an authority-form URI (eg "example.com:80"). See https://www.rfc-editor.org/rfc/rfc9110.html#name-intermediaries for definitions of intermediary and role scopes. Note that it explicitly says (requires) absolute-URI for "proxy" (aka forward-proxy) intermediaries. Clients do not speak origin-form to explicit proxies. [yes I know the first paragraph says an intermediary may switch behaviour based on just the request, that is for HTTP/2+. Squid being 1.1 is more restricted by the legacy issues]. Port is just a number, it can be anything *IF* it is made explicit. The scheme determines what protocol syntax is being spoken and thus what the restrictions and/or requirements are. ... and so the protocol for talking to
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 11/1/22 1:24 PM, squ...@treenet.co.nz wrote: No I meant W3C. Back in the before times things were a bit messy. Hum. I have more questions than answers. I'm not aware of W3C ever assigning ports. I thought it was /always/ IANA. Indeed, thus we cannot register it with IETF/IANA now. The IANA http-alt port would probably be best if we did go official. ACK You see my point I hope. A gateway proxy that returns an error to *every* request is not very good. Except it's not "/every/ /request/". It's "/every/ /request/ /of/ /a/ /specific/ /type/" where type is an HTTP version. What does CloudFlare or any of the other big proxy services or even other proxy applications do if you send them an HTTP/1.0 or even HTTP/0.9 request without the associated Host: header? There is no "configured proxy" for this use-case. Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) has the same issue. How common is the (network) transparent / intercepting / implicit use of Squid (or any proxy for that matter)? All of the installs that I've worked on (both as a user and as an administrator) have been explicit / non-transparent. I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s). Sure. "by default" has to work for *all* use-cases. I disagree. Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser. You were already specifying the non-default-http port via the "http-alt://" scheme in your example. Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations. I question the veracity of that. Mostly around said client's use of an explicit proxy. Port is just a number, it can be anything *IF* it is made explicit. 
The scheme determines what protocol syntax is being spoken and thus what the restrictions and/or requirements are. ... and so the protocol for talking to a webcache service is http-alt://. Whose default port is not 80 nor 443 for all the same reasons why Squid default listening port is 3128. If we wanted to we could easily switch Squid default port to http-alt/8080 without causing technical issues. But it would be annoying to update all the existing documentation around the Internet, so not worth the effort changing now. Ditto. Though the legacy install base has a long long long tail. 26 years after HTTP/1.0 came out and HTTP/0.9 still has use-cases alive. Where is HTTP/0.9 still being used? Decreasing, but still a potentially significant amount of traffic seen by Squid in general. Can you, or anyone else, quantify what "a potentially significant amount of traffic" is? Do these cases *really* /need/ to be covered by the /default/ configuration? Or can they be addressed by a variation from the default configuration? Ah, if you have been treating it like an irrelevant elephant that is your confusion. The "but not always" is a critical detail in the puzzle - its side-effects are the answer to your initial question of *why* Squid defaults to X instead of 80/443. I have no problems using non-default for the "but not always" configurations. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-11-01 11:38, Grant Taylor wrote: On 10/30/22 6:59 AM, squ...@treenet.co.nz wrote: Duane W. would be the best one to ask about the details. What I know is that some 10-12 years ago I discovered a message by Duane mentioning that W3C had (given or accepted) port 3128 for Squid use. I've checked the squid-cache archives and not seeing the message. Right now it looks like the W3C changed their systems and only track the standards documents. So I cannot reference their (outdated?) protocol registry :-{ . Also checked the squid-cache archives and not finding it in the email history. Sorry. Did you by chance mean IANA? No I meant W3C. Back in the before times things were a bit messy. I looked and 3128 is registered to something other than Squid. Indeed, thus we cannot register it with IETF/IANA now. The IANA http-alt port would probably be best if we did go official. Nor did their search bring anything up for Squid. I mean "authority" as used by the HTTP specification, which refers to https://www.rfc-editor.org/rfc/rfc3986#section-3.2 Yes exactly. That is the source of the problem, perpetuated by the need to retain on-wire byte/octet backward compatibility until HTTP/2 changed to binary format. Consider what the proxy has to do when (not if) the IP:port being connected to are that proxy's (eg localhost:80) and the URL is only a path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" mean "http://example.com/" or "http://example.net/" ? I would hope that it would return an error page, much like Squid does when it can't resolve a domain name or the connection times out. You see my point I hope. A gateway proxy that returns an error to *every* request is not very good. The key point is that the proxy host:port and the origin host:port are two different authorities and only the origin may be passed along in the URL (or URL+Host header). Agreed. 
When the client uses port 80 and 443 thinking they are origin services it is *required* (per https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit the real origin's info. Enter problems. Why would a client (worth its disk space) ever conflate the value of its configured proxy with the origin server? There is no "configured proxy" for this use-case. I can see a potential for confusion when using (network) transparent / intercepting proxies. Those are the two most/extremely common instances of the problematic use-cases. All implicit use of proxy (or gateway) has the same issue. The defaults though are tuned for origin server (or reverse-proxy) direct contact. I don't see how that precludes their use for (forward) proxy servers. I think you are getting stuck with the subtle difference between "use for case X" and "use by default". ANY port number can be used for *some* use-case(s). "by default" has to work for *all* use-cases. No Browser I know supports "http-alt://proxy.example.com?http://origin.example.net/index.html" URLs. But I bet that many browsers would support: http://proxy.example.com:8080/?http://origin.example.net/index.html Note that you are now having to add a non-default port "8080" and path "/" to the URL to make it valid/accepted by the Browser. Clients speaking HTTP origin-form (the http:// scheme) are not permitted to request tunnels or equivalent gateway services. They can only ask for resource representations. Also, I'm talking about "http://" and "https://" using their default ports of 80 & 443. Port is just a number, it can be anything *IF* it is made explicit. The scheme determines what protocol syntax is being spoken and thus what the restrictions and/or requirements are. ... and so the protocol for talking to a webcache service is http-alt://. Whose default port is not 80 nor 443 for all the same reasons why Squid default listening port is 3128. 
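The request-target forms being contrasted here (origin-form, absolute-form, authority-form, per RFC 9112) can be illustrated with a toy resolver. `resolve_origin` is a name invented for this sketch, not anything Squid exposes; it only shows why origin-form without a Host header leaves a proxy guessing:

```python
def resolve_origin(request_line, headers):
    """Toy illustration of RFC 9112 request-target forms.

    Returns the origin authority the request names, or None when it is
    ambiguous: an origin-form request with no Host header gives a proxy
    no way to know whether "GET / HTTP/1.0" means example.com or example.net.
    """
    method, target, _version = request_line.split()
    if method == "CONNECT":
        return target                                     # authority-form: tunnel
    if target.startswith("http://"):
        return target[len("http://"):].split("/", 1)[0]   # absolute-form: explicit proxy
    return headers.get("Host")                            # origin-form: needs out-of-band info

print(resolve_origin("GET http://example.com/ HTTP/1.1", {}))     # example.com
print(resolve_origin("GET / HTTP/1.1", {"Host": "example.net"}))  # example.net
print(resolve_origin("GET / HTTP/1.0", {}))                       # None
```

The last case is exactly the "GET / HTTP/1.0" ambiguity described above: nothing in the message says which origin the client meant.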
If we wanted to we could easily switch Squid default port to http-alt/8080 without causing technical issues. But it would be annoying to update all the existing documentation around the Internet, so not worth the effort changing now. It is based on experience. Squid used to be a lot more lenient and tried for decades to do the syntax auto-detection. The path from that to separate ports is littered with CVEs. Most notably the curse that keeps on giving: CVE-2009-0801, which is just the trigger issue for a whole nest of bad side effects. I wonder how much of that problematic history was related to HTTP/0.9 vs HTTP/1.0 vs HTTP/1.1 clients. Ditto. Though the legacy install base has a long long long tail. 26 years after HTTP/1.0 came out and HTTP/0.9 still has use-cases alive. I similarly wonder how much HTTP/1.0, or even HTTP/0.9, protocol is used these days. Decreasing, but still a potentially significant amount of traffic seen by Squid in general. Also, there is the elephant in the room of we're talking about a proxy server which is frequ
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/30/22 6:59 AM, squ...@treenet.co.nz wrote: Duane W. would be the best one to ask about the details. What I know is that some 10-12 years ago I discovered a message by Duane mentioning that W3C had (given or accepted) port 3128 for Squid use. I've checked the squid-cache archives and not seeing the message. Right now it looks like the W3C changed their systems and only track the standards documents. So I cannot reference their (outdated?) protocol registry :-{ . Also checked the squid-cache archives and not finding it in the email history. Sorry. Did you by chance mean IANA? I looked and 3128 is registered to something other than Squid. Nor did their search bring anything up for Squid. I mean "authority" as used by the HTTP specification, which refers to https://www.rfc-editor.org/rfc/rfc3986#section-3.2 Yes exactly. That is the source of the problem, perpetuated by the need to retain on-wire byte/octet backward compatibility until HTTP/2 changed to binary format. Consider what the proxy has to do when (not if) the IP:port being connected to are that proxy's (eg localhost:80) and the URL is only a path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" mean "http://example.com/" or "http://example.net/" ? I would hope that it would return an error page, much like Squid does when it can't resolve a domain name or the connection times out. The key point is that the proxy host:port and the origin host:port are two different authorities and only the origin may be passed along in the URL (or URL+Host header). Agreed. When the client uses port 80 and 443 thinking they are origin services it is *required* (per https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit the real origin's info. Enter problems. Why would a client (worth its disk space) ever conflate the value of its configured proxy with the origin server? I can see a potential for confusion when using (network) transparent / intercepting proxies. 
I refer to all the many ways a client may be explicitly or implicitly configured to be aware that it is talking to a proxy - such that it explicitly avoids sending the problematic origin-form URLs. ACK The defaults though are tuned for origin server (or reverse-proxy) direct contact. I don't see how that precludes their use for (forward) proxy servers. No Browser I know supports "http-alt://proxy.example.com?http://origin.example.net/index.html" URLs. But I bet that many browsers would support: http://proxy.example.com:8080/?http://origin.example.net/index.html Also, I'm talking about "http://" and "https://" using their default ports of 80 & 443. ... "at your own risk" they technically might be. So long as you only receive one of the three types of syntax there - port 80/443 being officially registered for origin / reverse-proxy syntax. I've been using them without any known problem for multiple years across multiple platforms, clients, and versions thereof. So I'll keep using it at my own risk. It is based on experience. Squid used to be a lot more lenient and tried for decades to do the syntax auto-detection. The path from that to separate ports is littered with CVEs. Most notably the curse that keeps on giving: CVE-2009-0801, which is just the trigger issue for a whole nest of bad side effects. I wonder how much of that problematic history was related to HTTP/0.9 vs HTTP/1.0 vs HTTP/1.1 clients. I similarly wonder how much HTTP/1.0, or even HTTP/0.9, protocol is used these days. Also, there is the elephant in the room: we're talking about a proxy server which is frequently, but not always, on a dedicated system or IP. As such, I have no problem predicating the use of the HTTP(80) and HTTPS(443) ports on there being no possible chance of confusion between forward proxy roles and origin server / reverse proxy roles. -- Grant. . . . 
unix || die
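For readers unfamiliar with CVE-2009-0801 (mentioned above): with intercepted traffic the client chose the destination IP itself, so a malicious client can send a Host header naming some other site and poison a shared cache. The mitigation is to cross-check the Host header against the address the client actually connected to (cf. Squid's host_verify_strict directive). A rough sketch; the function name and toy resolver are illustrative, not Squid's actual code:

```python
def host_matches_destination(dst_ip, host_header, resolve):
    """Return True only if the Host header resolves to the IP the
    intercepted client actually connected to."""
    name = host_header.split(":", 1)[0]  # drop any :port suffix
    return dst_ip in resolve(name)

# Toy resolver standing in for DNS (addresses from documentation ranges):
table = {"example.com": {"93.184.216.34"}, "evil.test": {"203.0.113.9"}}
resolve = lambda name: table.get(name, set())

print(host_matches_destination("93.184.216.34", "example.com", resolve))  # True
print(host_matches_destination("93.184.216.34", "evil.test", resolve))    # False
```

The second case is the attack shape: the client connected to example.com's IP but claimed a Host of evil.test, hoping the proxy caches the response under the forged name.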
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 2022-10-23 06:10, Grant Taylor wrote: On 10/21/22 11:30 PM, Amos Jeffries wrote: Not just convention. AFAICT it was formally registered with W3C, before everyone went to using IETF for registrations. Please elaborate on what was formally registered. I've only seen 3128 / 3129 be the default for Squid (and a few things emulating squid). Other proxies of the time, namely Netscape's and Microsoft's counterparts, tended to use 8080. I'd genuinely like to learn more about and understand the history / etymology / genesis of the 3128 / 3129. Duane W. would be the best one to ask about the details. What I know is that some 10-12 years ago I discovered a message by Duane mentioning that W3C had (given or accepted) port 3128 for Squid use. I've checked the squid-cache archives and not seeing the message. Right now it looks like the W3C changed their systems and only track the standards documents. So I cannot reference their (outdated?) protocol registry :-{ . Also checked the squid-cache archives and not finding it in the email history. Sorry. FYI, discussion started ~30 years ago. ACK The problem: For bandwidth savings HTTP/1.0 defined different URL syntax for origin and relay/proxy requests. The form sent to an origin server lacks any information about the authority. That was expected to be known out-of-band by the origin itself. HTTP/1.1 has attempted several different mechanisms to fix this over the years. None of them has been universally accepted, so the problem remains. The best we have is the mandatory Host header which most (but sadly not all) clients and servers use. HTTP/2 cements that design with the mandatory ":authority" pseudo-header field. So the problem is "fixed" for native HTTP/2+ traffic. But until HTTP/1.0 and broken HTTP/1.1 clients are all gone the issue will still crop up. I'm not entirely sure what you mean by "the authority". I'm taking it to mean the identity of the service that you are wanting content from. 
The Host: header comment with HTTP/1.1 is what makes me think this. I mean "authority" as used by the HTTP specification, which refers to https://www.rfc-editor.org/rfc/rfc3986#section-3.2 My understanding is that neither HTTP/0.9 nor HTTP/1.0 had a Host: header and that it was assumed that the IP address you were connecting to conveyed the server that you were wanting to connect to. Yes exactly. That is the source of the problem, perpetuated by the need to retain on-wire byte/octet backward compatibility until HTTP/2 changed to binary format. Consider what the proxy has to do when (not if) the IP:port being connected to are that proxy's (eg localhost:80) and the URL is only a path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" mean "http://example.com/" or "http://example.net/" ? More importantly the proxy hostname:port the client is opening TCP connections to may be different from the authority-info specified in the HTTP request message (or lack thereof). My working understanding of what the authority is seems to still work with this. The key point is that the proxy host:port and the origin host:port are two different authorities and only the origin may be passed along in the URL (or URL+Host header). When the client uses port 80 and 443 thinking they are origin services it is *required* (per https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit the real origin's info. Enter problems. This crosses security boundaries and involves out-of-band information sources at all three endpoints involved in the transaction for the message semantics and protocol negotiations to work properly. I feel like the nature of web traffic tends to frequently, but not always, cross security / administrative boundaries. As such, I don't think that the existence of proxies in the communications path alters things much. Please elaborate on what out-of-band information you are describing. 
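Concretely, the RFC 3986 "authority" referenced above is the [ userinfo "@" ] host [ ":" port ] triple, which Python's urlsplit exposes directly. A small illustration of the definition (the proxy hostname is made up for the example):

```python
from urllib.parse import urlsplit

u = urlsplit("http://user@proxy.example.com:3128/path")
# RFC 3986 authority = [ userinfo "@" ] host [ ":" port ]
print(u.netloc)                        # user@proxy.example.com:3128
print(u.username, u.hostname, u.port)  # user proxy.example.com 3128
```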
The most predominant thing that comes to mind, particularly with HTTP/1.1 and HTTP/2 is name resolution -- ostensibly DNS -- to identify the IP address to connect to. I refer to all the many ways the clients may be explicitly or implicitly configured to be aware that it is talking to a proxy - such that it explicitly avoids sending the problematic origin-form URLs. What that text does not say is that when they are omitted by the **user** they are taken from configuration settings in the OS: * the environment variable name provides: - the protocol name ("http" or "HTTPS", aka plain-text or encrypted) - the expected protocol syntax/semantics ("proxy" aka forward-proxy) * the machine /etc/services configuration provides the default port for the named protocol. Ergo the use of /default/ values when values are not specified. The defaults though are tuned for origin server (or reverse-proxy) direct contact. No Browser I know supports "http-alt://proxy.example.com?http://origin.e
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 2:43 AM, Matus UHLAR - fantomas wrote: These are the FTP protocol "hacks" I mentioned before. FYI RFC 1919: Classical versus Transparent IP Proxies § 4.1 -- Transparent proxy connection example -- describes the operation of an intercepting / (network) transparent FTP proxy that does not require any FTP protocol hacks. }:-) -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 1:09 PM, Grant Taylor wrote: It seems as if "transparent" in the context of proxies is as ambiguous as "secure" in the context of VPNs. The former can be "data transparent" and / or "network transparent". The latter can be "privacy secure" and / or "integrity secure". }:-) Oy vey. For completeness -- I've continued reading -- RFC 1919: Classical versus Transparent IP Proxies § 4 -- Transparent application proxies -- ¶ 3 starts with: "A transparent application proxy is often described as a system that appears like a packet filter to clients, and like a classical proxy to servers." So as I read it, RFC 1919 § 4 ¶ 3 supports "network transparency". Then it continues with: "Apart from this important concept, transparent and classical proxies can do similar access control checks and can offer an equivalent level of security/robustness/performance, at least as far as the proxy itself is concerned." Which reads as if /network/ transparent proxies can be /data/ non-transparent. Nomenclature and consistent definitions can be hard and can easily sideline discussions. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 1:01 PM, Matus UHLAR - fantomas wrote: sorry, this one is from RFC 7230, section 2.3 Thank you for the reference. If we don't use "data" and "network" in addition to "transparent", the result is ambiguous. "intercepting proxy" is not. Agreed. It seems as if "transparent" in the context of proxies is as ambiguous as "secure" in the context of VPNs. The former can be "data transparent" and / or "network transparent". The latter can be "privacy secure" and / or "integrity secure". }:-) -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 12:57 PM, Matus UHLAR - fantomas wrote: That is why I prefer using "intercepting proxy" for the case where connections between clients and servers are intercepted by a proxy, without it being configured in browsers. Fair enough. precisely, so what exactly aren't you convinced about? :-) The term "transparent" having multiple meanings. I believe we were talking past each other and now are not. Have you noticed this with SOCKS server? Yes, the Dante SOCKS server is exactly where I first read about the limitation that I'm talking about. Subsequent reading about other SOCKS servers confirmed this limitation. N.B. I'm specifically talking about how a SOCKS aware (FTP) client can ask that an external port be connected to the SOCKS client for a defined period of time (ten minutes in the examples I saw). This is sufficient for most active FTP connections (presuming that the FTP client is also the SOCKS client) as the data connection from the FTP server comes back to the SOCKS server ~> FTP client in short order. I guess this applies for firewalls that will disable connections to the port later. But the same applies for PASV connections and the reply when a firewall at the server side is used. Agreed. Aside: I don't think I've ever seen SOCKS be used to front public services. Rather I've only ever seen SOCKS used for (private) clients. When SSL/TLS is used between client and server, intermediate gateways and firewalls don't know which ports the endpoints agree on using PORT/PASV. Unless they intercept the SSL connection (which kind of makes them FTP endpoints) or the client supports and issues the FTP command "CCC", which is designed for this case. I'm afraid not many FTP clients do that. Agreed. I think this middle box behavior is far more common on HTTPS in larger data centers where the middle box is used to enforce compliance and the like. agree.
the workaround is to use a static list of ports at the server side and configure the server firewall to statically allow connections to these ports (optionally NAT them). Yep. however this is already not a Squid issue. Agreed. -- Grant. . . . unix || die
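That static-port workaround can be sketched concretely. The following is only an illustration, assuming vsftpd as the FTP server and iptables as the firewall; the port range and addresses are made up for the example:

```text
# /etc/vsftpd.conf -- pin passive-mode data connections to a fixed range
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100

# Firewall: statically allow the control port and that same range
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
iptables -A INPUT -p tcp --dport 50000:50100 -j ACCEPT
```

Because the range is fixed, the firewall never has to parse PASV replies out of a (possibly TLS-encrypted) control channel.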
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote: term "interception proxy" better defines what happens here: Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). On 25.10.22 12:52, Grant Taylor wrote: Where did you pull that quote from? I don't see "interception" anywhere in RFC 2616. sorry, this one is from 7230, section 2.3 Aside: I'm thinking that we're having term collisions between "data transparency" and "network transparency". Wherein a data transparent proxy doesn't modify the requested content and a network transparent proxy is a proxy that the client isn't aware that it's using. If we don't use "data" and "network" in addition to "transparent", the result is ambiguous. "intercepting proxy" is not. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. Support bacteria - they're the only culture some people have.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 11:03 AM, Matus UHLAR - fantomas wrote: I think intercepting is better, more precise. On 25.10.22 12:14, Grant Taylor wrote: I think that Squid can be an interception proxy as it can filter / alter content. I also think that Squid (as an interception proxy) can be used transparently. intercepting connections and modifying content are two separate functionalities, and "transparent proxy" is defined in RFC 2616 as "not doing the latter" while many people understand it as "doing the former". That is why I prefer using "intercepting proxy" for the case where connections between clients and servers are intercepted by a proxy, without it being configured in browsers. those two are completely separate, I'm not yet convinced. both functionalities described above are independent of each other, squid also supports both separately, which gives us four combinations. proxy may be intercepting and modify content (e.g. filter), including squid. I guess it could be said that the transparency, or modification of content, is one aspect and that how the client connects to the proxy, explicit or implicit (network magic), could be another aspect.

               +-------------+--------+
               | transparent | opaque |
    +----------+-------------+--------+
    | explicit |      2      |   1    |
    +----------+-------------+--------+
    | implicit |      3      |   4    |
    +----------+-------------+--------+

I believe that Squid can be either transparent and / or opaque depending on its configuration. precisely, so what exactly aren't you convinced about? :-) I also believe that Squid can be either explicit and / or implicit via networking magic. When I said that intercepting was a superset of transparent, I was including all four quadrants. I guess intercepting is what you have in the second row, while transparent is the first column, so that doesn't seem like a superset to me. I guess PORT connections have to be allowed on the SOCKS server which is I'd say not common (can be dangerous) Yes, the PORT connection must be allowed. But the problem that I found was that the PORT declaration has a timeout / finite time that they would wait for connections. E.g.
ten minutes in the example I was looking at. Have you noticed this with SOCKS server? I guess this applies for firewalls that will disable connections to the port later. But the same applies for PASV connections and the reply when a firewall at the server side is used. What's more is that the PORT connections must be declared /per/ /expected/ /connection/. They aren't a generic forward of traffic from any Internet connected system into the SOCKS client. passive connections are safe in case of ftp/ssl, where it's impossible to know for the proxy/firewall who connects where. I don't think that it's impossible. Rather it's just improbable. When SSL/TLS is used between client and server, intermediate gateways and firewalls don't know which ports the endpoints agree on using PORT/PASV. Unless they intercept the SSL connection (which kind of makes them FTP endpoints) or the client supports and issues the FTP command "CCC", which is designed for this case. I'm afraid not many FTP clients do that. It's technically possible to do TLS bump in the wire or other things like known keys (non-ephemeral / non-PFS) or sharing ephemeral / PFS keys from an internal server with a TLS monkey in the middle proxy. Such is technically possible, just highly improbable. agree. the workaround is to use a static list of ports at the server side and configure the server firewall to statically allow connections to these ports (optionally NAT them). however this is already not a Squid issue. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ I intend to live forever - so far so good.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote: term "interception proxy" better defines what happens here: Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). Where did you pull that quote from? I don't see "interception" anywhere in RFC 2616. Aside: I'm thinking that we're having term collisions between "data transparency" and "network transparency". Wherein a data transparent proxy doesn't modify the requested content and a network transparent proxy is a proxy that the client isn't aware that it's using. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 11:03 AM, Matus UHLAR - fantomas wrote: I think intercepting is better, more precise. I think that Squid can be an interception proxy as it can filter / alter content. I also think that Squid (as an interception proxy) can be used transparently. those two are completely separate, I'm not yet convinced. proxy may be intercepting and modify content (e.g. filter), including squid. I guess it could be said that the transparency, or modification of content, is one aspect and that how the client connects to the proxy, explicit or implicit (network magic), could be another aspect.

               +-------------+--------+
               | transparent | opaque |
    +----------+-------------+--------+
    | explicit |      2      |   1    |
    +----------+-------------+--------+
    | implicit |      3      |   4    |
    +----------+-------------+--------+

I believe that Squid can be either transparent and / or opaque depending on its configuration. I also believe that Squid can be either explicit and / or implicit via networking magic. When I said that intercepting was a superset of transparent, I was including all four quadrants. yes, especially PAC scripts are great to explicitly state what you need, including using socks for other than http(s)/ftp connections (direct smtp,imap,pop3 over socks) Yep. I guess PORT connections have to be allowed on the SOCKS server which is I'd say not common (can be dangerous) Yes, the PORT connection must be allowed. But the problem that I found was that the PORT declaration has a timeout / finite time that they would wait for connections. E.g. ten minutes in the example I was looking at. What's more is that the PORT connections must be declared /per/ /expected/ /connection/. They aren't a generic forward of traffic from any Internet connected system into the SOCKS client. passive connections are safe in case of ftp/ssl, where it's impossible to know for the proxy/firewall who connects where. I don't think that it's impossible. Rather it's just improbable.
It's technically possible to do TLS bump in the wire or other things like known keys (non-ephemeral / non-PFS) or sharing ephemeral / PFS keys from an internal server with a TLS monkey in the middle proxy. Such is technically possible, just highly improbable. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote: I prefer to explicitly state what one means by transparent because RFC 2616 has defined transparent proxy differently: On 25.10.22 10:56, Grant Taylor wrote: I do too. I /thought/ that I was explicitly stating. At least that was my intention. Aside: That's why I included my working definition. So hopefully you would know what I meant even if I accidentally used the wrong term. I think intercepting is better, more precise. Based on the quoted sections, it seems to me like an intercepting proxy is a superset of a transparent proxy. those two are completely separate, proxy may be intercepting and modify content (e.g. filter), including squid. Aside: I've long been a fan of and preferred explicit client configuration to use a proxy. yes, especially PAC scripts are great to explicitly state what you need, including using socks for other than http(s)/ftp connections (direct smtp,imap,pop3 over socks) and of course socks is a generic bidirectional TCP/UDP proxy, which makes it possible to implement it over nearly any kind of communication. Yes, SOCKS is bidirectional. However, inbound connections through it, e.g. FTP active connections, are time limited. -- At least I'm not aware of any way to have a SOCKS proxy allow inbound traffic indefinitely à la port forwarding in NAT or SSH remote port forwarding (assuming the real server is the SSH client). I guess PORT connections have to be allowed on the SOCKS server which is I'd say not common (can be dangerous) passive connections are safe in case of ftp/ssl, where it's impossible to know for the proxy/firewall who connects where. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ There's a long-standing bug relating to the x86 architecture that allows you to install Windows. -- Matthew D.
Fuller
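As a concrete illustration of the PAC approach Matus describes above (an explicit HTTP proxy for web traffic, SOCKS for everything else), here is a minimal sketch; all host names and ports are made up for the example:

```javascript
// Minimal PAC sketch: web traffic to an explicit HTTP proxy, everything
// else over SOCKS. Host names and ports here are illustrative only.
// dnsDomainIs() is one of the helper functions the browser's PAC
// environment provides.
function FindProxyForURL(url, host) {
  // Intranet hosts bypass the proxy entirely.
  if (dnsDomainIs(host, ".example.internal")) {
    return "DIRECT";
  }
  // http/https go to the explicit HTTP proxy on the conventional port.
  if (url.substring(0, 5) === "http:" || url.substring(0, 6) === "https:") {
    return "PROXY proxy.example.internal:3128";
  }
  // Anything else a PAC-aware client asks about (smtp, imap, pop3, ftp)
  // is pointed at a generic SOCKS gateway.
  return "SOCKS5 socks.example.internal:1080";
}
```

A PAC-aware non-web client asking about, say, an ftp:// URL falls through to the SOCKS line; browsers use the first directive they support.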
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote: I prefer to explicitly state what one means by transparent because RFC 2616 has defined transparent proxy differently: I do too. I /thought/ that I was explicitly stating. At least that was my intention. Aside: That's why I included my working definition. So hopefully you would know what I meant even if I accidentally used the wrong term. A "transparent proxy" is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification. term "interception proxy" better defines what happens here: Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). It seems as if I should (re)read RFC 2616 and refine my use of terms. Based on the quoted sections, it seems to me like an intercepting proxy is a superset of a transparent proxy. Aside: I can see a conceptual way to not modify any of the TCP connection (source & destination IPs & ports) while still actively proxying the traffic. -- I don't know if Squid supports this or not. But I do see conceptually what would be done. FYI, an intercepting proxy must use measures to avoid host header forgery: https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery https://www.kb.cert.org/vuls/id/435052 I'll have to read those. squid must find out the original destination IP used and check, while in explicit mode it makes no sense. I'll have to think about that. Probably more so after reading the links you provided. Aside: I've long been a fan of and preferred explicit client configuration to use a proxy. this is a bit different kind of hack. Generally the SOCKS library knows where/how to connect; socks wrappers (like socksify, tsocks, proxychains) are used to make other software use a socks proxy even if it does not support it. Agreed. and of course socks is a generic bidirectional TCP/UDP proxy, which makes it possible to implement it over nearly any kind of communication.
Yes, SOCKS is bidirectional. However, inbound connections through it, e.g. FTP active connections, are time limited. -- At least I'm not aware of any way to have a SOCKS proxy allow inbound traffic indefinitely à la port forwarding in NAT or SSH remote port forwarding (assuming the real server is the SSH client). -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 2:43 AM, Matus UHLAR - fantomas wrote: if by "transparent" you mean "intercepting" proxy, that is incorrect On 25.10.22 09:47, Grant Taylor wrote: By "transparent" I mean using network techniques to force clients to use a proxy that aren't themselves aware that they are using a proxy. I prefer to explicitly state what one means by transparent because RFC 2616 has defined transparent proxy differently: A "transparent proxy" is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification. term "interception proxy" better defines what happens here: Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). CONNECT is an HTTP command designed for use with an explicit HTTP proxy. Agreed. But what does Squid do differently after recognizing the request from the client; be it a GET, PUT, POST, or even a CONNECT; the former being transparent with the latter being explicit. Squid will still proxy the request as it understands it dependent on configuration, ACLs, etc. FYI, an intercepting proxy must use measures to avoid host header forgery: https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery https://www.kb.cert.org/vuls/id/435052 squid must find out the original destination IP used and check, while in explicit mode it makes no sense. These are the FTP protocol "hacks" I mentioned before. The HTTP protocol was created with proxying in mind, FTP was not. using a specially crafted login name for connecting to another server is one of those hacks. Okay. I (mis)took "hacks" to be things more severe like is typically done with proxifiers used with SOCKS servers, e.g. altering / overloading system library calls. this is a bit different kind of hack. Generally the SOCKS library knows where/how to connect; socks wrappers (like socksify, tsocks, proxychains) are used to make other software use a socks proxy even if it does not support it.
and of course socks is a generic bidirectional TCP/UDP proxy, which makes it possible to implement it over nearly any kind of communication. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ LSD will make your ECS screen display 16.7 million colors
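The host header forgery problem behind those two links can be illustrated with a hypothetical intercepted request (the addresses are documentation examples, not real hosts):

```text
# A client's TCP connection is NAT-intercepted on its way to
# 203.0.113.10:80, but the request it sends claims:
GET /account HTTP/1.1
Host: bank.example.com
```

An intercepting Squid must resolve bank.example.com itself and check that 203.0.113.10 is among the answers; otherwise a malicious client could poison a shared cache or slip past Host-based ACLs. With an explicit proxy the client never chose a destination IP, so there is nothing to cross-check, which is the point Matus makes above.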
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/25/22 2:43 AM, Matus UHLAR - fantomas wrote: if by "transparent" you mean "intercepting" proxy, that is incorrect By "transparent" I mean using network techniques to force clients to use a proxy that aren't themselves aware that they are using a proxy. CONNECT is an HTTP command designed for use with an explicit HTTP proxy. Agreed. But what does Squid do differently after recognizing the request from the client; be it a GET, PUT, POST, or even a CONNECT; the former being transparent with the latter being explicit. Squid will still proxy the request as it understands it dependent on configuration, ACLs, etc. I currently maintain that there is little difference, other than the VERB used, between transparent and explicit proxy configuration. Squid still largely does the same thing. Or said another way, all Squid needed to do to be able to support both transparent and explicit was to understand the additional VERBs. Much of the rest of the code was unchanged. To me there is not a fundamental difference, beyond initial VERBs, for transparent and explicit configuration. At least not anything like the differences between FTP, HTTP, and ICP. Each of which are fundamentally different protocols. Conversely transparent vs explicit is an extension of one protocol, namely HTTP. ok, there's no explicit need. And since there's no explicit need to use port 80 for an HTTP proxy, the convention is to use a different port because of reasons stated before. So port 3128 is based on convention. And that convention requires more explicit configuration in clients. Okay. So be it. These are the FTP protocol "hacks" I mentioned before. The HTTP protocol was created with proxying in mind, FTP was not. using a specially crafted login name for connecting to another server is one of those hacks. Okay. I (mis)took "hacks" to be things more severe like is typically done with proxifiers used with SOCKS servers, e.g. altering / overloading system library calls. -- Grant. . . .
unix || die
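For concreteness, the request shapes being contrasted in this exchange look like this on the wire (www.example.com is a placeholder):

```text
# Intercepted / transparent: origin-form, same as an origin server sees
GET /index.html HTTP/1.1
Host: www.example.com

# Explicit proxy, plain HTTP: absolute-form request-target
GET http://www.example.com/index.html HTTP/1.1
Host: www.example.com

# Explicit proxy, TLS: the client first asks for a blind tunnel
CONNECT www.example.com:443 HTTP/1.1
Host: www.example.com:443
```

At the parsing level the differences are largely confined to the request-target, which is consistent with Grant's point; the operational differences (destination-IP checks, ACL semantics) come from how the connection arrived.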
Re: [squid-users] FW: Encrypted browser-Squid connection errors
I do not know exactly what you mean by "https proxy" in this context, but I suspect that you are using the wrong FireFox setting. The easily accessible "HTTPS proxy" setting in the "Configure Proxy Access to the Internet" dialog is _not_ what you need! That setting configures a plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. You need a PAC file that tells FireFox to use an HTTPS proxy. See (again) https://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection which refers to https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68 skip my previous e-mail On 24.10.22 15:48, LEMRAZZEQ, Wadie wrote: Indeed, I am aware of this bug discussion and I did apply the PAC script into network.proxy.autoconfig_url, and it did not work perhaps a syntax error in that script you have pasted? I assume you pasted exactly the text as mentioned on: http://lists.squid-cache.org/pipermail/squid-users/2022-October/025315.html or: https://webproxy.diladele.com/docs/network/secure_proxy/browsers/ And what's more misleading is that the bug is tagged resolved, as if starting from Firefox 33, it supports https proxy out of the box yes, by using a PAC script, or perhaps an extension that configures it instead. FoxyProxy was mentioned IIRC But anyway, my next step is to use a PAC file, since it is the legacy method, if this doesn't work either I'm gonna use stunnels I know nothing of autoconfig being legacy. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ I intend to live forever - so far so good.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 24.10.22 15:48, LEMRAZZEQ, Wadie wrote: I think this discussion had diverged from its subject So I refocus on our subject, gents I do not know exactly what you mean by "https proxy" in this context, but I suspect that you are using the wrong FireFox setting. The easily accessible "HTTPS proxy" setting in the "Configure Proxy Access to the Internet" dialog is _not_ what you need! That setting configures a plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. You need a PAC file that tells FireFox to use an HTTPS proxy. See (again) https://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection which refers to https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68 Indeed, I am aware of this bug discussion and I did apply the PAC script into network.proxy.autoconfig_url, and it did not work And what's more misleading is that the bug is tagged resolved, as if starting from Firefox 33, it supports https proxy out of the box But anyway, my next step is to use a PAC file, since it is the legacy method legacy? if this doesn't work either I'm gonna use stunnels -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Quantum mechanics: The dreams stuff is made of.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/21/22 2:25 AM, Matus UHLAR - fantomas wrote: apparently this is a hack to be able to define proxy autoconfig in the location field. Since it has very restricted capabilities, it's apparently a non-issue. I guess that you can only define FindProxyForURL() this way. On 21.10.22 11:25, Grant Taylor wrote: From memory, the only effective difference between explicit proxy mode and transparent proxy mode (from Squid's point of view) is the use of the `CONNECT` vs `GET` et al. command and how the hostname is specified. if by "transparent" you mean "intercepting" proxy, that is incorrect CONNECT is an HTTP command designed for use with an explicit HTTP proxy. I think Adam Meyer also explained it nicely. Yes, Adam said that 3128 is a /convention/. ok, there's no explicit need. And since there's no explicit need to use port 80 for an HTTP proxy, the convention is to use a different port because of reasons stated before. I repeat, the FTP protocol does not support proxies and port 21 would be of low usage here. I remember reading things years ago where people would use a bog standard FTP client to connect to an /FTP/ server acting as an /FTP/ proxy. I believe they then issued `OPEN` commands on the /FTP/ proxy just like they did on their /FTP/ client. -- My understanding was that this had absolutely /nothing/ to do with /HTTP/, neither protocol nor proxy daemon. Nor was it telnet / rlogin / etc. to run a standard ftp client on a bastion host. Though that was also a solution at the time. On 21.10.22 11:51, Grant Taylor wrote: I knew that I had seen something about using an FTP proxy that wasn't HTTP related. I encourage you to read ~/.ncftp/firewall for more details. Conveniently copied below. I'd like to point out two things: 1) The syntax and ports used only reference FTP. 2) The 'NcFTP does NOT support HTTP proxies that do FTP, such as "squid" or Netscape Proxy Server. Why? Because you have to communicate with them using HTTP, and this is a FTP only program.' So ...
yes, I am quite certain that there are FTP /proxies/ that are NOT using HTTP. These are the FTP protocol "hacks" I mentioned before. The HTTP protocol was created with proxying in mind, FTP was not. using a specially crafted login name for connecting to another server is one of those hacks. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ The 3 biggest disasters: Hiroshima 45, Tschernobyl 86, Windows 95
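One common form of the "specially crafted login name" hack is the user@host convention that many FTP proxies understand. This transcript is purely illustrative, with a hypothetical proxy host:

```text
ftp> open ftpproxy.example.internal
Connected to ftpproxy.example.internal.
Name: anonymous@ftp.example.com
Password: ********
```

The proxy splits the login name at "@", opens its own control connection to ftp.example.com, and logs in there as "anonymous". Nothing in the exchange is HTTP; it is FTP end to end, which matches the NcFTP note quoted above.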
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/24/22 9:48 AM, LEMRAZZEQ, Wadie wrote: But anyway, my next step is to use a PAC file, since it is the legacy method, if this doesn't work either I'm gonna use stunnels I have (a superset of) the following in my PAC file. It is working perfectly fine for me across multiple browsers and multiple OSs.

    function FindProxyForURL(url, host) {
        if (
            dnsDomainIs(host, "example.com") ||
            dnsDomainIs(host, "example.net") ||
            dnsDomainIs(host, "example.org") ||
            false
        ) {
            return "DIRECT";
        } else {
            return "HTTPS 192.0.2.251:443; PROXY 192.0.2.251:80";
        }
    }

N.B. I'm doing TLS Monkey in the Middle with a self signed cert installed as a root CA in my client systems. -- Being able to filter HTTPS content is WONDERFUL. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
I think this discussion had diverged from its subject So I refocus on our subject, gents I do not know exactly what you mean by "https proxy" in this context, but I suspect that you are using the wrong FireFox setting. The easily accessible "HTTPS proxy" setting in the "Configure Proxy Access to the Internet" dialog is _not_ what you need! That setting configures a plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. You need a PAC file that tells FireFox to use an HTTPS proxy. See (again) https://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection which refers to https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68 Indeed, I am aware of this bug discussion and I did apply the PAC script into network.proxy.autoconfig_url, and it did not work And what's more misleading is that the bug is tagged resolved, as if starting from Firefox 33, it supports https proxy out of the box But anyway, my next step is to use a PAC file, since it is the legacy method, if this doesn't work either I'm gonna use stunnels Thank you everyone for your insights Regards, -Original Message- From: squid-users On Behalf Of Rafael Akchurin Sent: Thursday, October 20, 2022 7:34 AM To: Grant Taylor; squid-users@lists.squid-cache.org Subject: Re: [squid-users] FW: Encrypted browser-Squid connection errors The following line set in the Script Address box of the browser proxy configuration will help - no need for a PAC file for quick tests. Be sure to adjust the proxy name and port. data:,function FindProxyForURL(u, h){return "HTTPS proxy.example.lan:8443";} More info at https://webproxy.diladele.com/docs/network/secure_proxy/browsers/ Best regards, Rafael Akchurin Diladele B.V.
-Original Message- From: squid-users On Behalf Of Grant Taylor Sent: Thursday, October 20, 2022 2:39 AM To: squid-users@lists.squid-cache.org Subject: Re: [squid-users] FW: Encrypted browser-Squid connection errors On 10/19/22 8:33 AM, Alex Rousskov wrote: > I do not know exactly what you mean by "https proxy" in this context, > but I suspect that you are using the wrong FireFox setting. The easily > accessible "HTTPS proxy" setting in the "Configure Proxy Access to the > Internet" dialog is _not_ what you need! That setting configures a > plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. +10 to the antiquated UI ~> worse UX. > You need a PAC file that tells FireFox to use an HTTPS proxy. I believe you can use the FoxyProxy add-on to manage this too. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/21/22 11:30 PM, Amos Jeffries wrote: Not just convention. AFAICT it was formally registered with W3C, before everyone went to using IETF for registrations. Please elaborate on what was formally registered. I've only seen 3128 / 3129 be the default for Squid (and a few things emulating Squid). Other proxies of the time, namely Netscape's and Microsoft's counterparts, tended to use 8080. I'd genuinely like to learn more about and understand the history / etymology / genesis of the 3128 / 3129. FYI, discussion started ~30 years ago. ACK The problem: For bandwidth savings HTTP/1.0 defined different URL syntax for origin and relay/proxy requests. The form sent to an origin server lacks any information about the authority. That was expected to be known out-of-band by the origin itself. HTTP/1.1 has attempted several different mechanisms to fix this over the years. None of them has been universally accepted, so the problem remains. The best we have is the mandatory Host header which most (but sadly not all) clients and servers use. HTTP/2 cements that design with the mandatory ":authority" pseudo-header field. So the problem is "fixed" for native HTTP/2+ traffic. But until HTTP/1.0 and broken HTTP/1.1 clients are all gone the issue will still crop up. I'm not entirely sure what you mean by "the authority". I'm taking it to mean the identity of the service that you are wanting content from. The Host: header comment with HTTP/1.1 is what makes me think this. My understanding is that neither HTTP/0.9 nor HTTP/1.0 had a Host: header and that it was assumed that the IP address you were connecting to conveyed the server that you were wanting to connect to. I have very little technical understanding of HTTP/2 as I've not needed to delve into it and it has largely just worked for me. And ... Squid still only supports HTTP/1.1 and older. Okay. That sort of surprises me. But I have zero knowledge to disagree.
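The "authority" being discussed is the host[:port] part of the URL. The syntax difference Amos describes can be shown side by side (www.example.com is a placeholder):

```text
# HTTP/1.0 request to an origin server: no authority anywhere
GET /page.html HTTP/1.0

# HTTP/1.0 request to a proxy: authority present, because the proxy
# cannot otherwise know where to forward the request
GET http://www.example.com/page.html HTTP/1.0

# HTTP/1.1: the mandatory Host header carries the authority either way
GET /page.html HTTP/1.1
Host: www.example.com
```

The first form is the one whose authority information is "lost" unless the server already knows, out of band, which site it is serving.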
More importantly the proxy hostname:port the client is opening TCP connections to may be different from the authority-info specified in the HTTP request message (or lack thereof). My working understanding of what the authority is seems to still work with this. This crosses security boundaries and involves out-of-band information sources at all three endpoints involved in the transaction for the message semantics and protocol negotiations to work properly. I feel like the nature of web traffic tends to frequently, but not always, cross security / administrative boundaries. As such, I don't think that the existence of proxies in the communications path alters things much. Please elaborate on what out-of-band information you are describing. The most predominant thing that comes to mind, particularly with HTTP/1.1 and HTTP/2, is name resolution -- ostensibly DNS -- to identify the IP address to connect to. What that text does not say is that when they are omitted by the **user** they are taken from configuration settings in the OS: * the environment variable name provides: - the protocol name ("http" or "HTTPS", aka plain-text or encrypted) - the expected protocol syntax/semantics ("proxy" aka forward-proxy) * the machine /etc/services configuration provides the default port for the named protocol. Ergo the use of /default/ values when values are not specified. I feel like this in a roundabout way supports my stance that the default ports are perfectly fine to use. Attempting to use a reverse-proxy or origin server with such a configuration may work for some messages, but **will** fail due to syntax or semantic errors on others. I question the veracity of that statement. Sure, trying to speak contemporary protocols (HTTP/1.1 or HTTP/2) to an ancient HTTP server is not going to work. But I believe that Squid and Apache HTTPD can be configured to perform all three roles; origin server, reverse proxy, and forward proxy. 
Aside: Squid might not be a typical origin server in that you can't have it /directly/ serve /typical/ origin content. However I believe it does function as an origin server for things like Squid error pages. Likewise NAT'ing inbound port 443 or port 80 traffic to a forward-proxy will encounter the same types of issues - while it is perfectly fine to do so towards a reverse-proxy or origin server. I believe that is entirely dependent on the capability and configuration of the forward proxy. -- I've done exactly this with Apache HTTPD. Though I've not had the (dis)pleasure of doing so with Squid. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 22/10/22 06:04, Grant Taylor wrote: On 10/20/22 11:58 PM, Adam Majer wrote: It's basically by convention now. Sure. Conventions change over time. Long enough ago 3128 wasn't the conventional port for Squid. Not just convention. AFAICT it was formally registered with W3C, before everyone went to using IETF for registrations. Maybe, hopefully, said discussion will spark an idea in at least one person's head and that might turn into something in 10 or 20 years. FYI, discussion started ~30 years ago. The problem: For bandwidth savings HTTP/1.0 defined different URL syntax for origin and relay/proxy requests. The form sent to an origin server lacks any information about the authority. That was expected to be known out-of-band by the origin itself. HTTP/1.1 has attempted several different mechanisms to fix this over the years. None of them has been universally accepted, so the problem remains. The best we have is the mandatory Host header which most (but sadly not all) clients and servers use. HTTP/2 cements that design with the mandatory ":authority" pseudo-header field. So the problem is "fixed" for native HTTP/2+ traffic. But until HTTP/1.0 and broken HTTP/1.1 clients are all gone the issue will still crop up. And ... Squid still only supports HTTP/1.1 and older. Forward proxies don't sit on regular server ports because they require explicit config on the client. If we're explicitly configuring the client, then what influence does the chosen port have on the explicit configuration? More importantly the proxy hostname:port the client is opening TCP connections to may be different from the authority-info specified in the HTTP request message (or lack thereof). This crosses security boundaries and involves out-of-band information sources at all three endpoints involved in the transaction for the message semantics and protocol negotiations to work properly. 
Curl's man page is rather convenient and somewhat supportive ~> telling: ``` Using an environment variable to set the proxy has the same effect as using the -x, --proxy option. http_proxy [protocol://]<host>[:port] Sets the proxy server to use for HTTP. HTTPS_PROXY [protocol://]<host>[:port] Sets the proxy server to use for HTTPS. ``` Notice how the `[:port]` is /optional/? What that text does not say is that when they are omitted by the **user** they are taken from configuration settings in the OS: * the environment variable name provides: - the protocol name ("http" or "HTTPS", aka plain-text or encrypted) - the expected protocol syntax/semantics ("proxy" aka forward-proxy) * the machine /etc/services configuration provides the default port for the named protocol. Attempting to use a reverse-proxy or origin server with such a configuration may work for some messages, but **will** fail due to syntax or semantic errors on others. Likewise NAT'ing inbound port 443 or port 80 traffic to a forward-proxy will encounter the same types of issues - while it is perfectly fine to do so towards a reverse-proxy or origin server. HTH Amos
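The fallback behaviour Amos describes (scheme taken from the variable, missing port filled in from a services database) can be sketched roughly like this; the small lookup table is a stand-in for the /etc/services lookup, and the hostnames are placeholders:

```javascript
// Sketch: resolve a proxy environment variable value (e.g. $http_proxy)
// into scheme/host/port, defaulting the port from a services table when
// the user omitted it (standing in for /etc/services).
const DEFAULT_PORTS = { "http:": 80, "https:": 443 };

function proxyEndpoint(envValue) {
  const u = new URL(envValue);
  const port = u.port ? Number(u.port) : DEFAULT_PORTS[u.protocol];
  return { scheme: u.protocol.replace(":", ""), host: u.hostname, port };
}

console.log(proxyEndpoint("http://proxy.example.net/"));
// → { scheme: 'http', host: 'proxy.example.net', port: 80 }
console.log(proxyEndpoint("http://proxy.example.net:3128/"));
// → { scheme: 'http', host: 'proxy.example.net', port: 3128 }
```

This is only a model of the behaviour, not curl's actual code path; the point is that an omitted `[:port]` resolves to the scheme's well-known port, which is Grant's argument for running proxies there.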
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/21/22 2:51 AM, Matus UHLAR - fantomas wrote: I should have added, that squid does support FTP proxying using one of the hacks I mentioned (I haven't tested it yet). I think I used Squid's FTP protocol support years ago. And, since this requires a protocol (FTP) other than the default (HTTP) at the proxy side, people are free to configure it on a random port they choose. FTP proxying is so rarely used that it doesn't even have a common port besides 21 used for FTP. The fundamental core component of my (sub)thread is that alternate ports aren't /needed/. The default IANA reserved port is perfectly fine. -- Presuming that there isn't any contention or (site local) convention to use a different port. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/21/22 11:25 AM, Grant Taylor wrote: I remember reading things years ago where people would use a bog standard FTP client to connect to an /FTP/ server acting as an /FTP/ proxy. I knew that I had seen something about using an FTP proxy that wasn't HTTP related. I encourage you to read ~/.ncftp/firewall for more details. Conveniently copied below. I'd like to point out two things: 1) The syntax and ports used only reference FTP. 2) The 'NcFTP does NOT support HTTP proxies that do FTP, such as "squid" or Netscape Proxy Server. Why? Because you have to communicate with them using HTTP, and this is a FTP only program.' So ... yes, I am quite certain that there are FTP /proxies/ that are NOT using HTTP. --8<-- # NcFTP firewall preferences # == # # If you need to use a proxy for FTP, you can configure it below. # If you do not need one, leave the ``firewall-type'' variable set # to 0. Any line that does not begin with the ``#'' character is # considered a configuration command line. # # NOTE: NcFTP does NOT support HTTP proxies that do FTP, such as "squid" #or Netscape Proxy Server. Why? Because you have to communicate with #them using HTTP, and this is a FTP only program. # # Types of firewalls: # -- # #type 1: Connect to firewall host, but send "USER u...@real.host.name" # #type 2: Connect to firewall, login with "USER fwuser" and # "PASS fwpassword", and then "USER u...@real.host.name" # #type 3: Connect to and login to firewall, and then use # "SITE real.host.name", followed by the regular USER and PASS. # #type 4: Connect to and login to firewall, and then use # "OPEN real.host.name", followed by the regular USER and PASS. # #type 5: Connect to firewall host, but send # "USER user@fwu...@real.host.name" and # "PASS pass@fwpass" to login. # #type 6: Connect to firewall host, but send # "USER fwu...@real.host.name" and # "PASS fwpass" followed by a regular # "USER user" and # "PASS pass" to complete the login. 
# #type 7: Connect to firewall host, but send # "USER u...@real.host.name fwuser" and # "PASS pass" followed by # "ACCT fwpass" to complete the login. # #type 8: Connect to firewall host, but send "USER u...@real.host.name:port" # #type 9: Connect to firewall host, but send "USER u...@real.host.name port" # #type 0: Do NOT use a firewall (most users will choose this). # firewall-type=0 # # # # The ``firewall-host'' variable should be the IP address or hostname of # your firewall server machine. # firewall-host=firewall.home.example.net # # # # The ``firewall-user'' variable tells NcFTP what to use as the user ID # when it logs in to the firewall before connecting to the outside world. # firewall-user=fwuser # # # # The ``firewall-password'' variable is the password associated with # the firewall-user ID. If you set this here, be sure to change the # permissions on this file so that no one (except the superuser) can # see your password. You may also leave this commented out, and then # NcFTP will prompt you each time for the password. # firewall-password=fwpass # # # # Your firewall may require you to connect to a non-standard port for # outside FTP services, instead of the internet standard port number (21). # firewall-port=21 # # # # You probably do not want to FTP to the firewall for hosts on your own # domain. You can set ``firewall-exception-list'' to a list of domains # or hosts where the firewall should not be used. For example, if your # domain was ``probe.net'' you could set this to ``.probe.net''. # # If you leave this commented out, the default behavior is to attempt to # lookup the current domain, and exclude hosts for it. Otherwise, set it # to a list of comma-delimited domains or hostnames. The special token # ``localdomain'' is used for unqualified hostnames, so if you want hosts # without explicit domain names to avoid the firewall, be sure to include # that in your list. 
# firewall-exception-list=.home.example.net,localhost,localdomain # # # # You may also specify passive mode here. Normally this is set in the # regular $HOME/.ncftp/prefs file. This must be set to one of # "on", "off", or "optional", which mean always use PASV, # always use PORT, and try PASV then PORT, respectively. # #passive=on # # # # NOTE: This file was created for you on Sat Jan 21 23:09:26 2017 #by NcFTP 3.2.5. Removing this file will cause the next run of NcFTP #to generate a new one, possibly with more configurable options. # # ALSO: A /etc/ncftp.firewall file, if present, is processed before this file, #and a /etc/ncftp.firewall.fixed file, if present, is processed after. -->8-- -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/21/22 2:25 AM, Matus UHLAR - fantomas wrote: apparently this is a hack to be able to define proxy autoconfig in the location field. Since it has very restricted capabilities, it's apparently non-issue. I guess that you can only define FindProxyForURL() this way. ACK Thank you for the additional details Matus. I know of such servers. I did say /rarely/. ;-) I too have seen them. They are just a disproportionately small number of web and proxy servers. And, HTTP proxy does not even have defined own port so people use random ports or ports commonly used for this service. Sure it does. An HTTP proxy server is an HTTP server. HTTP has port 80 defined. From memory, the only effective difference between explicit proxy mode and transparent proxy mode (from Squid's point of view) is the use of the `CONNECT` vs `GET` et al. commands and how the hostname is specified. the beautiful nature of HTTP allows us to define port within URL, That is a very nice convenience. But a /convenience/ does not equate to a /need/. therefore people tend to use separate ports instead of allocating extra IP addresses for proxy usage. That is a convention. But a /convention/ does not equate to a /need/. I think Adam Majer also explained it nicely. Yes, Adam said that 3128 is a /convention/. convention != need That is FTP through HTTP proxy. Not FTP through FTP proxy. Hum. I want to disagree, but I don't have anything to counter that at the moment. I repeat, FTP protocol does not support proxies and port 21 would be of low usage here. I remember reading things years ago where people would use a bog standard FTP client to connect to an /FTP/ server acting as an /FTP/ proxy. I believe they then issued `OPEN` commands on the /FTP/ proxy just like they did on their /FTP/ client. -- My understanding was that this had absolutely /nothing/ to do with /HTTP/, neither protocol nor proxy daemon. Nor was it telnet / rlogin / etc. to run a standard ftp client on a bastion host. 
Though that was also a solution at the time. -- Grant. . . . unix || die
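The explicit-vs-transparent distinction Grant recalls can be illustrated with the raw request shapes a client sends in each mode (a sketch; `example.com` is a placeholder):

```javascript
// 1. Explicit proxy, TLS traffic: the client tunnels with CONNECT.
const connectReq =
  "CONNECT example.com:443 HTTP/1.1\r\nHost: example.com:443\r\n\r\n";

// 2. Explicit proxy, plain-text traffic: absolute-form GET, so the proxy
//    learns the scheme and origin host from the request line itself.
const proxiedGet =
  "GET http://example.com/ HTTP/1.1\r\nHost: example.com\r\n\r\n";

// 3. Intercepted ("transparent") traffic: only the origin-form the client
//    would send to the origin server; the hostname lives in Host alone.
const interceptedGet = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";

console.log(connectReq.split(" ")[0]);     // "CONNECT"
console.log(proxiedGet.split(" ")[1]);     // "http://example.com/"
console.log(interceptedGet.split(" ")[1]); // "/"
```

The differing request-target in each mode is where the hostname is (or is not) specified, which is the "effective difference" the discussion above is about.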
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/20/22 11:58 PM, Adam Majer wrote: It's basically by convention now. Sure. Conventions change over time. Long enough ago 3128 wasn't the conventional port for Squid. It used to be a convention to allow smoking in public / government offices. Now the convention is the exact opposite. Port 3128 has been set as default port by Squid for more than 2 decades. Agreed. Don't expect a change. I'm not expecting a change. At most I was hoping for a discussion about it. Maybe, hopefully, said discussion will spark an idea in at least one person's head and that might turn into something in 10 or 20 years. Secondly, like it was said already, servers and proxies are different things. Semantics are VERY important here. HTTP daemons and proxy daemons are both servers. They just serve slightly different things. And you need to understand the difference between forward and reverse proxies. Agreed. I've been using / leveraging / exploiting (in a good way) a combination of forward and reverse proxies for multiple decades. They are distinctly different, but yet still remarkably similar. Squid, Apache's HTTPD, Nginx, and even contemporary IIS can act as both an HTTP(S) server (a.k.a. reverse proxy) and / or a forward proxy. Reverse proxies can sit on the regular ports because that's their job -- to act as origins. Forward proxies don't sit on regular server ports because they require explicit config on the client. If we're explicitly configuring the client, then what influence does the chosen port have on the explicit configuration? Curl's man page is rather convenient and somewhat supportive ~> telling: ``` Using an environment variable to set the proxy has the same effect as using the -x, --proxy option. http_proxy [protocol://]<host>[:port] Sets the proxy server to use for HTTP. HTTPS_PROXY [protocol://]<host>[:port] Sets the proxy server to use for HTTPS. ``` Notice how the `[:port]` is /optional/? 
Curl (and other things) will default to using the IANA defined port for `[protocol://]` if `[:port]` is unspecified. So ... why do we /need/ to use a different port than what IANA has defined for `[protocol://]`? I'm genuinely asking why we /need/ to use a different port. What, other than convention or even port contention, is prompting us to use a port other than what IANA has defined for the protocol? And don't forget we used to have transparent proxies which kind of died (I think?) thanks to TLS. I question the veracity of /used/ /to/. Yes, TLS made things more difficult. But in a corporate (like) environment doing TLS monkey in the middle is quite possible with Squid. I am and have been doing exactly that on my personal devices for the last two years. Port 3128 is for *forward* proxy setup. That's by convention / Squid default. I've run forward HTTP proxies on port 80 and forward HTTPS proxies on port 443 for years without any problems. What's more is that it simplifies the client configuration by removing the need to specify the port. The following works perfectly fine for curl, et al. export http_proxy="proxy.home.example" So -- again -- why do we /need/ to use a different port? I fully acknowledge /convention/ and /contention/. If that's the answer to the question, then so be it. But I'm not yet convinced of such. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/20/22 9:49 AM, Matus UHLAR - fantomas wrote: Also, FTP protocol (port 21) does not support proxying, and using FTP proxy usually involves hacks. On 20.10.22 10:14, Grant Taylor wrote: I completely disagree. I've been using FTP through proxies for years. Firefox (and Thunderbird) has an option /specifically/ for using FTP through proxies. As depicted in the picture of Firefox on the page that Rafael A. linked to. On 21.10.22 10:25, Matus UHLAR - fantomas wrote: That is FTP through HTTP proxy. Not FTP through FTP proxy. I repeat, FTP protocol does not support proxies and port 21 would be of low usage here. I should have added, that squid does support FTP proxying using one of the hacks I mentioned (I haven't tested it yet). And, since this requires a protocol (FTP) other than the default (HTTP) at the proxy side, people are free to configure it on a random port they choose. FTP proxying is so rarely used that it doesn't even have a common port besides 21 used for FTP. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. On the other hand, you have different fingers.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/20/22 9:49 AM, Matus UHLAR - fantomas wrote: proxy autoconfig is javascript-based but uses very limited javascript. On 20.10.22 10:14, Grant Taylor wrote: My comment was more directed at why is $LANGUAGE_DOESNT_MATTER used /in/ /the/ /location/ /field/? apparently this is a hack to be able to define proxy autoconfig in the location field. Since it has very restricted capabilities, it's apparently non-issue. I guess that you can only define FindProxyForURL() this way. because standard servers and not proxies usually run on standard ports. I trust that you don't intend it to be, but that feels like a non-answer to me. That's sort of tantamount to saying "I drive on the shoulder because there are cars on the road." HTTP(S) connections /are/ the HTTP protocol and the standard port for HTTP protocol is port 80 for unencrypted connections and port 443 for encrypted connections. I rarely see a web server and a proxy server (as in different service daemons) run /on/ /the/ /same/ /system/. As such there is no conflict I know of such servers. And, HTTP proxy does not even have defined own port so people use random ports or ports commonly used for this service. Then there is the entire different class where the same daemon functions as the web server and the proxy server. Apache's HTTPD and Nginx immediately come to mind as fulfilling both functions. So ... I feel like "de-conflicting ports" is as true as "having to have different IPs for different TLS certificates". the beautiful nature of HTTP allows us to define port within URL, and therefore people tend to use separate ports instead of allocating extra IP addresses for proxy usage. I think Adam Majer also explained it nicely. Also, FTP protocol (port 21) does not support proxying, and using FTP proxy usually involves hacks. I completely disagree. I've been using FTP through proxies for years. Firefox (and Thunderbird) has an option /specifically/ for using FTP through proxies. 
As depicted in the picture of Firefox on the page that Rafael A. linked to. That is FTP through HTTP proxy. Not FTP through FTP proxy. I repeat, FTP protocol does not support proxies and port 21 would be of low usage here. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. I drive way too fast to worry about cholesterol.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/20/22 18:14, Grant Taylor wrote: On 10/20/22 9:49 AM, Matus UHLAR - fantomas wrote: because standard servers and not proxies usually run on standard ports. I trust that you don't intend it to be, but that feels like a non-answer to me. It's basically by convention now. Port 3128 has been set as default port by Squid for more than 2 decades. Don't expect a change. Secondly, like it was said already, servers and proxies are different things. And you need to understand the difference between forward and reverse proxies. Reverse proxies can sit on the regular ports because that's their job -- to act as origins. Forward proxies don't sit on regular server ports because they require explicit config on the client. And don't forget we used to have transparent proxies which kind of died (I think?) thanks to TLS. Port 3128 is for *forward* proxy setup. Cheers, - Adam
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/20/22 9:49 AM, Matus UHLAR - fantomas wrote: proxy autoconfig is javascript-based but uses very limited javascript. My comment was more directed at why is $LANGUAGE_DOESNT_MATTER used /in/ /the/ /location/ /field/? while I agree javascript is not ideal, it's very hard to configure proper proxy configuration without using scripting language. and, when we need scripting language, it's much easier to use something that has been implemented and is used in browsers. I understand and agree with (re)using JavaScript as the chosen language. That's not my concern. (See above.) because standard servers and not proxies usually run on standard ports. I trust that you don't intend it to be, but that feels like a non-answer to me. That's sort of tantamount to saying "I drive on the shoulder because there are cars on the road." HTTP(S) connections /are/ the HTTP protocol and the standard port for HTTP protocol is port 80 for unencrypted connections and port 443 for encrypted connections. I rarely see a web server and a proxy server (as in different service daemons) run /on/ /the/ /same/ /system/. As such there is no conflict between ports on different systems / IPs. The rare case where I do see a web server and a proxy server (still different service daemons) frequently are using different IPs. The proxy is usually listening on a globally routed IP while the web server is listening on the loopback IP. Then there is the entire different class where the same daemon functions as the web server and the proxy server. Apache's HTTPD and Nginx immediately come to mind as fulfilling both functions. So ... I feel like "de-conflicting ports" is as true as "having to have different IPs for different TLS certificates". Also, FTP protocol (port 21) does not support proxying, and using FTP proxy usually involves hacks. I completely disagree. I've been using FTP through proxies for years. Firefox (and Thunderbird) has an option /specifically/ for using FTP through proxies. 
As depicted in the picture of Firefox on the page that Rafael A. linked to. All mainstream web browsers have had support for proxying FTP traffic for (at least) 15 of the last 25 years. Up to the point that they started removing FTP protocol support from the browser. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/19/22 11:33 PM, Rafael Akchurin wrote: The following line set in the Script Address box of the browser proxy configuration will help - no need for a PAC file for quick tests. Be sure to adjust the proxy name and port. data:,function FindProxyForURL(u, h){return "HTTPS proxy.example.lan:8443";} On 20.10.22 09:14, Grant Taylor wrote: Is it just me, or is it slightly disturbing that JavaScript in a configuration property box is being executed? proxy autoconfig is javascript-based but uses very limited javascript. while I agree javascript is not ideal, it's very hard to configure proper proxy configuration without using scripting language. and, when we need scripting language, it's much easier to use something that has been implemented and is used in browsers. More info at https://webproxy.diladele.com/docs/network/secure_proxy/browsers/ Aside: Why the propensity of running the HTTP, HTTPS, FTP, and SOCKS proxies on non-standard ports? Why not run them on their standard ports; 80, 443, 21, and 1080 respectively? because standard servers and not proxies usually run on standard ports. Also, FTP protocol (port 21) does not support proxying, and using FTP proxy usually involves hacks. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. I'm not interested in your website anymore. If you need cookies, bake them yourself.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/19/22 11:33 PM, Rafael Akchurin wrote: The following line set in the Script Address box of the browser proxy configuration will help - no need for a PAC file for quick tests. Be sure to adjust the proxy name and port. data:,function FindProxyForURL(u, h){return "HTTPS proxy.example.lan:8443";} Is it just me, or is it slightly disturbing that JavaScript in a configuration property box is being executed? I guess I had naively assumed that something else, ideally hardened against malicious content, somewhere else is executing the JavaScript retrieved from the PAC file. -- I feel like there should be a separation of responsibilities. More info at https://webproxy.diladele.com/docs/network/secure_proxy/browsers/ Aside: Why the propensity of running the HTTP, HTTPS, FTP, and SOCKS proxies on non-standard ports? Why not run them on their standard ports; 80, 443, 21, and 1080 respectively? I switched to using standard ports years ago to simplify configuring HTTP proxy support in Ubuntu installers; "http://proxy.example.net/", no need to fiddle with the port. Or if you have DNS search domains configured, "http://proxy/" is sufficient. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
The following line set in the Script Address box of the browser proxy configuration will help - no need for a PAC file for quick tests. Be sure to adjust the proxy name and port. data:,function FindProxyForURL(u, h){return "HTTPS proxy.example.lan:8443";} More info at https://webproxy.diladele.com/docs/network/secure_proxy/browsers/ Best regards, Rafael Akchurin Diladele B.V. -Original Message- From: squid-users On Behalf Of Grant Taylor Sent: Thursday, October 20, 2022 2:39 AM To: squid-users@lists.squid-cache.org Subject: Re: [squid-users] FW: Encrypted browser-Squid connection errors On 10/19/22 8:33 AM, Alex Rousskov wrote: > I do not know exactly what you mean by "https proxy" in this context, > but I suspect that you are using the wrong FireFox setting. The easily > accessible "HTTPS proxy" setting in the "Configure Proxy Access to the > Internet" dialog is _not_ what you need! That setting configures a > plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. +10 to the antiquated UI ~> worse UX. > You need a PAC file that tells FireFox to use an HTTPS proxy. I believe you can use the FoxyProxy add-on to manage this too. -- Grant. . . . unix || die
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/19/22 8:33 AM, Alex Rousskov wrote: I do not know exactly what you mean by "https proxy" in this context, but I suspect that you are using the wrong FireFox setting. The easily accessible "HTTPS proxy" setting in the "Configure Proxy Access to the Internet" dialog is _not_ what you need! That setting configures a plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. +10 to the antiquated UI ~> worse UX. You need a PAC file that tells FireFox to use an HTTPS proxy. I believe you can use the FoxyProxy add-on to manage this too. -- Grant. . . . unix || die
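A minimal PAC file along the lines discussed in this thread might look like the sketch below. The proxy name and port are placeholders, and the local-name bypass is an illustrative addition; the `data:` one-liner quoted elsewhere in the thread is essentially this function collapsed onto a single line:

```javascript
// Minimal proxy.pac sketch. The "HTTPS" keyword tells the browser to
// encrypt its connection to the proxy itself (a TLS proxy), unlike the
// plain "PROXY" keyword which means a clear-text connection to the proxy.
function FindProxyForURL(url, host) {
  if (host.indexOf(".") === -1) { // unqualified local hostname
    return "DIRECT";
  }
  return "HTTPS proxy.example.lan:8443";
}
```

Serve this file over HTTP (or inline it as a `data:` URL) and point the browser's "Automatic proxy configuration URL" / Script Address setting at it.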
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/19/22 09:53, LEMRAZZEQ, Wadie wrote: As you can see firefox sends a plain text CONNECT request, and I did set the https proxy in firefox settings I do not know exactly what you mean by "https proxy" in this context, but I suspect that you are using the wrong FireFox setting. The easily accessible "HTTPS proxy" setting in the "Configure Proxy Access to the Internet" dialog is _not_ what you need! That setting configures a plain text HTTP proxy for handling HTTPS traffic. Very misleading, I know. You need a PAC file that tells FireFox to use an HTTPS proxy. See (again) https://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection which refers to https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68 HTH, Alex. On 10/19/22 09:53, LEMRAZZEQ, Wadie wrote: On 10/18/22 04:55, LEMRAZZEQ, Wadie wrote: I have problem only web browsers (Firefox, chromium), and I do specify to use https proxy in the browser proxy config But if I use curl, it works ERROR: failure while accepting a TLS connection on conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1: connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1 Error.cc(22) update: recent: ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1 According to "openssl errstr", that OpenSSL error is: error:1408F09B:SSL routines:ssl3_get_record:https proxy request Most likely, the client is sending a plain text CONNECT request before encrypting the TLS connection to the HTTPS proxy. In other words, the client thinks it is talking to an HTTP proxy while you want it to think that it is talking to an HTTPS proxy. For example, * HTTP proxy: curl -x http://172.17.0.2:3128/ ... https://example.com * HTTPS proxy: curl -x https://172.17.0.2:3129/ ... https://example.com Yes indeed, requesting with curl works, unlike the web browsers As far as I can tell based on the information you have provided, your browser is not doing what you want it to do. 
I can only speculate that the browser is misconfigured. You can confirm what the browser is doing by looking at browser-Squid packets using wireshark or a similar tool. If you see an HTTP CONNECT requests sent to Squid over a plain text TCP connection, then your browser is _not_ configured to use an HTTPS proxy (or is buggy). The browser should be opening a TCP connection and then initiating a TLS handshake. Yes, that's what I did Here is the capture of firefox: https://i.stack.imgur.com/NNnGx.png And here the capture of curl: https://i.stack.imgur.com/OxJJ3.png As you can see firefox sends a plain text CONNECT request, and I did parameter https proxy in firefox settings If it is a browser bug, firefox team resolved this compatibility issue a while ago: https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68 But still the issue persists or I did miss something Thank you Regards, This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. ___ squid-users mailing list squid-users@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-users
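[Editor's note: a minimal PAC file of the kind Alex describes might look like the sketch below. The proxy address 172.17.0.2:3129 is taken from the log excerpts in this thread; the filename and serving location are placeholders.]

```shell
# Write a minimal PAC file that tells the browser to reach the proxy
# itself over TLS. The "HTTPS host:port" return keyword means "speak
# TLS to the proxy" -- it is not the same as the plain-text "PROXY"
# keyword, which is what the misleading browser dialog configures.
cat > proxy.pac <<'EOF'
function FindProxyForURL(url, host) {
    return "HTTPS 172.17.0.2:3129";
}
EOF
```

Serve the file over HTTP and point Firefox's "Automatic proxy configuration URL" setting at it.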
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/18/22 04:55, LEMRAZZEQ, Wadie wrote:
>>> I have problem only web browsers (Firefox, Chromium), and I do
>>> specify to use https proxy in the browser proxy config. But if I use
>>> curl, it works
>>>
>>> ERROR: failure while accepting a TLS connection on conn77
>>> local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1:
>>> connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1
>>> Error.cc(22) update: recent:
>>> ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1
>>>
>>> According to "openssl errstr", that OpenSSL error is:
>>> error:1408F09B:SSL routines:ssl3_get_record:https proxy request
>>>
>>> Most likely, the client is sending a plain text CONNECT request
>>> before encrypting the TLS connection to the HTTPS proxy. In other
>>> words, the client thinks it is talking to an HTTP proxy while you
>>> want it to think that it is talking to an HTTPS proxy. For example,
>>>
>>> * HTTP proxy:  curl -x http://172.17.0.2:3128/ ... https://example.com
>>> * HTTPS proxy: curl -x https://172.17.0.2:3129/ ... https://example.com
>>
>> Yes indeed, requesting with curl works, unlike the web browsers
>
> As far as I can tell based on the information you have provided, your
> browser is not doing what you want it to do. I can only speculate that
> the browser is misconfigured.
>
> You can confirm what the browser is doing by looking at browser-Squid
> packets using wireshark or a similar tool. If you see an HTTP CONNECT
> request sent to Squid over a plain text TCP connection, then your
> browser is _not_ configured to use an HTTPS proxy (or is buggy). The
> browser should be opening a TCP connection and then initiating a TLS
> handshake.

Yes, that's what I did. Here is the capture of firefox:
https://i.stack.imgur.com/NNnGx.png
And here is the capture of curl:
https://i.stack.imgur.com/OxJJ3.png

As you can see, firefox sends a plain text CONNECT request, and I did configure an https proxy in firefox settings.

If it is a browser bug, the firefox team resolved this compatibility issue a while ago:
https://bugzilla.mozilla.org/show_bug.cgi?id=378637#c68
But still the issue persists, or I missed something.

Thank you
Regards,

This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/18/22 04:55, LEMRAZZEQ, Wadie wrote:
>>> I have problem only web browsers (Firefox, Chromium), and I do
>>> specify to use https proxy in the browser proxy config. But if I use
>>> curl, it works
>>>
>>> ERROR: failure while accepting a TLS connection on conn77
>>> local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1:
>>> connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1
>>> Error.cc(22) update: recent:
>>> ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1
>>
>> According to "openssl errstr", that OpenSSL error is:
>> error:1408F09B:SSL routines:ssl3_get_record:https proxy request
>>
>> Most likely, the client is sending a plain text CONNECT request
>> before encrypting the TLS connection to the HTTPS proxy. In other
>> words, the client thinks it is talking to an HTTP proxy while you
>> want it to think that it is talking to an HTTPS proxy. For example,
>>
>> * HTTP proxy:  curl -x http://172.17.0.2:3128/ ... https://example.com
>> * HTTPS proxy: curl -x https://172.17.0.2:3129/ ... https://example.com
>
> Yes indeed, requesting with curl works, unlike the web browsers

As far as I can tell based on the information you have provided, your browser is not doing what you want it to do. I can only speculate that the browser is misconfigured.

You can confirm what the browser is doing by looking at browser-Squid packets using wireshark or a similar tool. If you see an HTTP CONNECT request sent to Squid over a plain text TCP connection, then your browser is _not_ configured to use an HTTPS proxy (or is buggy). The browser should be opening a TCP connection and then initiating a TLS handshake.

HTH,
Alex.
Re: [squid-users] FW: Encrypted browser-Squid connection errors
> On 10/14/22 10:32, LEMRAZZEQ, Wadie wrote:
>> I tried to implement this on a dockerized Alpine, and a squid 5.5
>> with the openssl module
>
> FWIW, Squid v5.5 is unusable in many environments -- too many bugs.
> Use v5.7 or later. I do not know whether one of those bugs is
> responsible for the specific problem you are discussing though.

I tried with squid 5.7, but I still have the same issue.

>> but when I request squid https port, I got this error every time, in
>> cache.log:
>
> _How_ do you "request squid https port"?

Ah sorry, I didn't mention that. I have the problem only with web browsers (Firefox, Chromium), and I do specify to use an https proxy in the browser proxy config. But if I use curl, it works.

>> ERROR: failure while accepting a TLS connection on conn77
>> local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1:
>> connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1
>> Error.cc(22) update: recent:
>> ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1
>
> According to "openssl errstr", that OpenSSL error is:
> error:1408F09B:SSL routines:ssl3_get_record:https proxy request
>
> Most likely, the client is sending a plain text CONNECT request
> before encrypting the TLS connection to the HTTPS proxy. In other
> words, the client thinks it is talking to an HTTP proxy while you
> want it to think that it is talking to an HTTPS proxy. For example,
>
> * HTTP proxy:  curl -x http://172.17.0.2:3128/ ... https://example.com
> * HTTPS proxy: curl -x https://172.17.0.2:3129/ ... https://example.com

Yes indeed, requesting with curl works, unlike the web browsers.

>> ...
>> I also tried this with squid 4.10 with the gnutls module, in an
>> Ubuntu 20.04 environment, with the same squid.conf, and I got again
>> a TLS error
>> ...
>> client_side.cc(2597) tlsAttemptHandshake: Error negotiating TLS on
>> local=x.x.x.x:3129 remote=x.x.x.x:50874 FD 11 flags=1: Aborted by
>> client: An unexpected TLS packet was received.
>> ...
>> I used for certificates, a self signed one, and a generated
>> certificate signed by our CA, for both scenarios
>>
>> Also, I tried multiple https_port options (disable some SSL
>> implementation, manipulation of client certificates...) but without
>> success
Re: [squid-users] FW: Encrypted browser-Squid connection errors
On 10/14/22 10:32, LEMRAZZEQ, Wadie wrote:
> I tried to implement this on a dockerized Alpine, and a squid 5.5 with
> the openssl module

FWIW, Squid v5.5 is unusable in many environments -- too many bugs. Use v5.7 or later. I do not know whether one of those bugs is responsible for the specific problem you are discussing though.

> in squid.conf, I have:
> ...
> http_port 3128
> https_port 3129 cert=/etc/squid/crt.pem key=/etc/squid/key.pem

OK.

> but when I request squid https port, I got this error every time, in
> cache.log:

_How_ do you "request squid https port"?

> ERROR: failure while accepting a TLS connection on conn77
> local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1:
> connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1
> Error.cc(22) update: recent:
> ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1

According to "openssl errstr", that OpenSSL error is:

error:1408F09B:SSL routines:ssl3_get_record:https proxy request

Most likely, the client is sending a plain text CONNECT request before encrypting the TLS connection to the HTTPS proxy. In other words, the client thinks it is talking to an HTTP proxy while you want it to think that it is talking to an HTTPS proxy. For example,

* HTTP proxy:  curl -x http://172.17.0.2:3128/ ... https://example.com
* HTTPS proxy: curl -x https://172.17.0.2:3129/ ... https://example.com

HTH,
Alex.

> ...
> I also tried this with squid 4.10 with the gnutls module, in an Ubuntu
> 20.04 environment, with the same squid.conf, and I got again a TLS
> error
> ...
> client_side.cc(2597) tlsAttemptHandshake: Error negotiating TLS on
> local=x.x.x.x:3129 remote=x.x.x.x:50874 FD 11 flags=1: Aborted by
> client: An unexpected TLS packet was received.
> ...
> I used for certificates, a self signed one, and a generated
> certificate signed by our CA, for both scenarios
>
> Also, I tried multiple https_port options (disable some SSL
> implementation, manipulation of client certificates...) but without
> success
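[Editor's note: the "openssl errstr" decoding step above can be reproduced directly. The reason string in the comment is the one quoted in this thread, as printed by OpenSSL 1.1.x; newer OpenSSL releases may render this legacy code with a different reason text.]

```shell
# Decode the TLS_LIB_ERR value from Squid's cache.log entry.
# On OpenSSL 1.1.x this prints the reason quoted in the thread:
#   error:1408F09B:SSL routines:ssl3_get_record:https proxy request
# meaning the TLS stack received a plaintext proxy request where it
# expected a TLS ClientHello.
openssl errstr 1408F09B
```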
[squid-users] FW: Encrypted browser-Squid connection errors
Hello,

I'm trying to set up encrypted communication between the browser and squid. I followed this section to implement it:
https://wiki.squid-cache.org/Features/HTTPS#Encrypted_browser-Squid_connection

I tried to implement this on a dockerized Alpine with squid 5.5 and the openssl module. In squid.conf, I have:

...
http_port 3128
https_port 3129 cert=/etc/squid/crt.pem key=/etc/squid/key.pem
...

but when I request the squid https port, I get this error every time in cache.log:

...
ERROR: failure while accepting a TLS connection on conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1:
0x7fbd208f33e0*1 connection: conn77 local=172.17.0.2:3129 remote=172.17.0.1:56608 FD 12 flags=1
Pipeline.cc(31) front: Pipeline 0x7fbd208f13a0 empty
Error.cc(22) update: recent: ERR_SECURE_ACCEPT_FAIL/SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=1408F09B+TLS_IO_ERR=1
...

I also tried this with squid 4.10 and the gnutls module, in an Ubuntu 20.04 environment, with the same squid.conf, and I again got a TLS error:

...
client_side.cc(2597) tlsAttemptHandshake: Error negotiating TLS on local=x.x.x.x:3129 remote=x.x.x.x:50874 FD 11 flags=1: Aborted by client: An unexpected TLS packet was received.
...

For certificates, I used a self-signed one and a certificate signed by our CA, for both scenarios.

I also tried multiple https_port options (disabling parts of the SSL implementation, manipulating client certificates...) but without success.

Am I missing something in the squid configuration?
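[Editor's note: a self-signed certificate/key pair like the one referenced by the https_port line in the original post can be generated in one step. The subject CN below is a placeholder; the files would then be installed at the paths given in squid.conf.]

```shell
# Generate a throwaway self-signed certificate and key for Squid's
# https_port. The CN is a placeholder; for browsers to accept the TLS
# connection to the proxy without warnings, this certificate (or the
# CA that signed your real one) must be trusted by the client.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=proxy.example.test" \
  -keyout key.pem -out crt.pem
```

squid.conf would then reference the installed copies, e.g. cert=/etc/squid/crt.pem key=/etc/squid/key.pem as in the original post.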