Re: [squid-users] OT - myportname ACL

2021-09-05 Thread Grant Taylor

On 9/4/21 1:58 PM, Alex Rousskov wrote:
the best way is to name your ports and use the myportname ACL instead 
of trying to match one of the many port numbers associated with 
transparent connections, especially when Squid has a tendency to 
"swap" source and destination addresses in that context.


Please forgive my ignorance.  Will you please provide an example of what 
you mean by "use the myportname ACL"?  --  I'm relatively new to Squid 
(4.x) after having not used it for about 15 years.




--
Grant. . . .
unix || die



smime.p7s
Description: S/MIME Cryptographic Signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] OT - myportname ACL

2021-09-06 Thread Grant Taylor

On 9/6/21 1:28 PM, Alex Rousskov wrote:

 http_port ... name=PortGettingGreenTraffic

 acl greenTraffic myportname PortGettingGreenTraffic

 whatever_directive ... greenTraffic


Interesting.  I'll have to do some reading ~> learning to understand 
better.  But I believe you have given me sufficient bread crumbs to chase.
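For anyone else reading along in the archive, a slightly fuller sketch of the pattern Alex describes (the port numbers, names, and log path below are illustrative, not from the thread):

```
# give each listening port a name
http_port 3128 name=explicitPort
http_port 3129 intercept name=interceptPort

# match traffic by the name of the port it arrived on,
# instead of guessing from port numbers
acl viaIntercept myportname interceptPort

# example use: log intercepted traffic to its own file
access_log /var/log/squid/intercept.log viaIntercept
```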


Thank you, Alex.



--
Grant. . . .
unix || die





Re: [squid-users] SSL Terminating Reverse Proxy with Referral Tracking

2021-09-14 Thread Grant Taylor

On 9/12/21 10:16 PM, Mehrdad Fatemi wrote:

Hi Everyone,


Hi,

TL;DR:  Proxy Auto Configuration

I'm looking for an elegant technology option to have telcos zero-rate 
all of the traffic to a set of online destinations.


I assume that "zero rating" means that traffic to specific 
destinations, e.g. the proxy server, is free of charge from the telco 
customers' point of view.  If this is not what you mean, please correct me.


Using an SSL terminating reverse proxy could be a potential answer 
to this as we can focus on zero-rating the proxy's downstream traffic 
with each ISP/Telco without worrying about upstream servers.


I have concerns about "SSL terminating".  It sounds to me like you are 
decidedly outside of the typical enterprise or home network scenario 
where you are wanting to terminate / intercept / bump-in-the-wire TLS 
connections.  As such, I have *SERIOUS* /concerns/ about the security 
implications of this.  --  But, I'm going to assume that you are well 
aware of the implications and are addressing them properly.  But I'd be 
remiss to not say something.  Moving on.


Aside:  I sat on this message for a few days while messing with my own 
TLS bump-in-the-wire /in/ /my/ /house/ on my /home/ network.  As such, 
I'm perfectly fine with TLS termination within environments that have 
the authority to do so.  ;-)  --  I sat on the message while working on 
my own Proxy Auto Configuration script to have multiple clients do what 
I want.


Further aside:  I'm *EXTREMELY* happy with Squid's support for TLS 
bump-in-the-wire for my use cases; allowing ancient clients to use Squid 
as an encryption gateway between SSL3 / TLSv1 / TLSv1.1 and TLSv1.2 / 
TLSv1.3.  The ability to filter various things like tracking pixels, and 
the caching is wonderful.  --  I can't quite wrap my head around why 
Squid improves performance on a GPON connection, but it does.  I would 
have thought that a 1 Gbps connection would negate the need for local caching.



There are two challenges to address here though:
1) Modern web applications on the upstream servers use many 3rd party 
and X-a-a-S resources  (e.g. embedded media, libraries, etc) that we 
also want to pass through the proxy to ensure they are zero-rated.


That's going to be a game of Whack-a-Mole.

There's also the possibility that you will proxy ~> zero-rate some 
common library that is also used by many sites which don't pass through 
your infrastructure.  So I suspect it's an impure WaM game at best.


2) For a user to complete an end-to-end process they may get referred to 
3rd party websites (like a payment gateway) that we only want to 
zero-rate if the referral is from one of our designated upstream servers.


I suspect that trying to integrate conditional behavior based on account 
balance is going to be ... tricky, if not problematic.  I'd suggest 
worrying about that after the fact or at a later point in the process.


Any advice on whether and how Squid and other related technologies could 
help is much appreciated.


I feel like a judicious use of a Proxy Auto Configuration (PAC) file / 
script may be a good start.  It should be relatively easy for 
subscribers to configure their devices to utilize.  Then you can update 
the PAC file as the WaM game requires.  The PAC has the added advantage 
that you can direct proxy traffic to different proxy servers as necessary.


As far as normal (forward) vs. reverse proxy is concerned, it seems to 
me like your proxy will be acting as both: a reverse proxy / accelerator 
/and/ a /conditional/ forward proxy.  The conditionality is based on the
result of the PAC file's FindProxyForURL() function.  You are in some 
ways acting as a reverse proxy / accelerator for specific sites.  You 
are also acting as a forward proxy for clients.  The behaviors just 
overlap in your use case.  The secret sauce is in the PAC file; what 
does and does not get sent to your proxy.
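A minimal sketch of that secret sauce (every hostname here is a placeholder; a real PAC file could also use built-in helpers such as dnsDomainIs(), which are avoided here so the function stands alone):

```javascript
// Hypothetical zero-rated destinations; maintained as the
// Whack-a-Mole game reveals new third-party resources.
var zeroRated = [
  "www.zerorated.example",
  "cdn.zerorated.example",
  "pay.gateway.example"   // third-party referral target
];

function FindProxyForURL(url, host) {
  for (var i = 0; i < zeroRated.length; i++) {
    var d = zeroRated[i];
    // match the listed domain itself or any subdomain of it
    if (host === d || host.slice(-(d.length + 1)) === "." + d) {
      return "PROXY proxy.zerorated.example:3128; DIRECT";
    }
  }
  return "DIRECT"; // everything else bypasses the proxy
}
```

The "; DIRECT" fallback lets clients keep working (at normal rates) if the proxy is unreachable; drop it if zero-rating must never silently fail open.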


Seeing as how you are dealing with subscribers, you probably do not want 
the closely related / largely overlapping Web Proxy Auto Discovery 
(WPAD) functionality.  IMHO WPAD is just a discovery mechanism that 
points clients at a PAC file.




--
Grant. . . .
unix || die





Re: [squid-users] SSL Terminating Reverse Proxy with Referral Tracking

2021-09-14 Thread Grant Taylor

On 9/14/21 7:12 PM, Grant Taylor wrote:
I have concerns about "SSL terminating".  It sounds to me like you are 
decidedly outside of the typical enterprise or home network scenario 
where you are wanting to terminate / intercept / bump-in-the-wire TLS 
connections.  As such, I have *SERIOUS* /concerns/ about the security 
implications of this.  --  But, I'm going to assume that you are well 
aware of the implications and are addressing them properly.  But I'd be 
remiss to not say something.  Moving on.


I meant to add, I'm not convinced that you /need/ to do TLS termination.

Or said another way, I'm not convinced that simply proxying CONNECT 
requests isn't sufficient.


Do you actually /need/ to terminate the TLS?  Or is simply proxying the 
CONNECT request sufficient?  Can you stay out of the TLS stream, thereby 
avoiding any and all security concerns associated with TLS termination?


Proxies have been passing TLS traffic for decades without TLS termination.



--
Grant. . . .
unix || die





Re: [squid-users] SSL Terminating Reverse Proxy with Referral Tracking

2021-09-14 Thread Grant Taylor

On 9/14/21 6:09 PM, Amos Jeffries wrote:
b) If those upstream servers are embedding URLs for clients to directly 
contact the XaaS services. Then your desire is not possible without 
redesigning the upstream service(s) such that they stop exposing their 
use of the XaaS. Which often also means redesigning the XaaS service 
itself too.


I don't know about Squid, but I do know that it's possible to manipulate 
traffic with Apache in a similar role.  I've done so a number of times 
using the mod_proxy and associated mod_proxy_html modules.  This allows 
Apache to re-write content as it's passing through the Apache proxy.


I wonder if Squid's ICAP support might allow something to modify traffic 
as it passes through the Squid proxy.


That is not possible for a reverse-proxy to do. It will never see the 
third-party traffic, as mentioned by (b) above.


Sure it is.  }:-)



--
Grant. . . .
unix || die





Re: [squid-users] Redirecting URLs on HTTPS traffic

2021-09-22 Thread Grant Taylor

On 9/22/21 6:44 AM, roee klinger wrote:

Hello,


Hi,

I have an internal network in our office where we want to redirect every 
google search to a Duckduckgo search instead, I already have a script 
written that knows how to take the Google URL and convert it to Duckduckgo.


I am reading about how to implement it on Squid, however everything I 
can find is only referring to HTTP traffic, not HTTPS.


Is that possible to do using HTTPS?


I've not done what you are asking about.  However, based on the 
following, I do believe that it is possible.


1)  I've read about a couple different options to do redirection:
a)  Redirection via Squid directives in squid.conf.
b)  Use ICAP to modify the traffic.
2)  TLS bump-in-the-wire to get into the HTTPS stream and apply #1.
I've got this working -- quite well -- at home.

#2 is probably your biggest hurdle.  I don't think it's /hard/, but 
there are nuances to it.

 - How you do the TLS BitW; peek / stare / splice / bump, and when you do it.
 - The security / legality implications of intercepting TLS connections.
 - The logistics of installing the Root CA's public key that Squid uses.

But I believe what you are wanting to do is eminently possible.
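To make the options under #1 concrete: Squid's url_rewrite_program can run a helper that reads one URL per line on stdin and prints the (possibly rewritten) URL back.  The rewriting logic for the Google ~> Duckduckgo case might look like the sketch below (the hostname test and Node runtime are assumptions; the stdin loop is omitted so only the pure logic is shown):

```javascript
// Hypothetical rewrite rule: map a Google search URL to the
// equivalent Duckduckgo search; anything else passes through.
function googleToDdg(rawUrl) {
  var u;
  try {
    u = new URL(rawUrl); // WHATWG URL, global in Node >= 10
  } catch (e) {
    return rawUrl; // unparseable input: leave it alone
  }
  var isGoogleSearch =
    /(^|\.)google\.[a-z.]+$/.test(u.hostname) && u.pathname === "/search";
  if (!isGoogleSearch) {
    return rawUrl;
  }
  var q = u.searchParams.get("q") || "";
  return "https://duckduckgo.com/?q=" + encodeURIComponent(q);
}
```

A real helper would wrap this in a line-by-line stdin loop and, for newer Squid, answer with "OK rewrite-url=..." per the helper protocol.  And none of it matters until the TLS bump part (#2) lets Squid see the URLs at all.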



--
Grant. . . .
unix || die





Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-24 Thread Grant Taylor

On 9/24/21 3:18 PM, Alex Rousskov wrote:
If it is correct, then it is not clear how the change of an IP address 
would affect those making API requests using the domain name, and 
what role Squid is playing here.


To build on Alex's good question, are the API clients sending the API 
calls /through/ Squid?  Or are they configured to bypass Squid?


The mechanism the clients use seems extremely germane here.  E.g. 
whether the clients are configured to bypass the proxy for 
fred.mydomain.com, or whether clients use Squid and Squid has been 
configured to access fred.mydomain.com on behalf of them, and how.


We need more information to be able to say anything meaningful, much 
less helpful.




--
Grant. . . .
unix || die





Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-24 Thread Grant Taylor

On 9/24/21 3:26 PM, Mike Yates wrote:

Ok so let's say the new server outside the dmz has a different name.


Are you going to re-configure the clients to use the new / different 
name?  Or do you need to re-configure either the intermediate Squid or 
the target; Fred, also running squid, to translate from the old API 
hostname to the new / different hostname?


I need a squid server configuration that will just forward the api 
calls to an external address.  So my internal servers will still point 
to Fred ( which is now a squid server and has access to the outside 
world) and will then forward the requests to the new server I have in 
the cloud.


Are there two Squid servers in play now that Fred is running Squid?

Is there a proxy server, Squid or otherwise, between clients and Fred? 
Or is Fred the Squid server that you were referencing in your emails?



Long story short I just need a pass through squid server.


I suspect you might need to do more than simply pass the requests 
through.  It sounds to me like you need to translate requests for 
https://fred.mydomain.com/api/event to https://dmz.mydomain.com/api/event.


It seems to me like you are going to want to configure Squid on Fred to 
act as a Reverse Proxy (Accelerator).
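A minimal, hedged sketch of such an accelerator setup on Fred (the certificate path, peer port, and the tls option are assumptions; hostnames are the ones used earlier in this thread):

```
# accept the API calls that still target fred.mydomain.com
https_port 443 accel tls-cert=/etc/squid/fred.mydomain.com.pem defaultsite=fred.mydomain.com

# forward them to the new server, speaking TLS to it
cache_peer dmz.mydomain.com parent 443 0 no-query originserver tls name=cloudApi
cache_peer_access cloudApi allow all
```

Normal http_access rules still apply on top of this.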


Link - Reverse Proxy Mode
 - https://wiki.squid-cache.org/SquidFaq/ReverseProxy



--
Grant. . . .
unix || die





Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-27 Thread Grant Taylor

On 9/27/21 7:32 AM, Mike Yates wrote:

Let me ask this then 

I just want squid to redirect any requests (http for instance) to 
a specific external url so for instance http://mysuidserver:80 to 
http://externalserver:80 ...


What does "redirect" mean in this context?

Is it an HTTP 301 / 302 / 307 / 308 redirect?

Is it a JavaScript window.location.href change?

Is it some sort of game with TCP connections?

Is it an HTTP proxy that receives requests to one name and proxies 
requests to another name?


"redirect" is vague and difficult to answer.  All of the above, and 
more, options have significantly different solutions.  What's more is 
that each has their own requirements.



Does that help 


Unfortunately not much.

I'm just not sure what the minimal conf file I would need to achieve 
this ...


You need to better define what you are wanting to do.

As stated in another message, I suspect, but do not know for sure, that 
you want Squid to proxy HTTP requests for one name to a different name.


The kicker is that Squid can do this (at least) a couple of different ways;
1)  Simply initiate TCP connections to the new name and assume that the 
new server knows how to deal with the old name, because the queries will 
still be using the old name.
2)  Intercept the connections to the old name and initiate queries for 
the new name at the new destination.


TLS has implications on both of these methods and serious complications.
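Hedged sketches of the two methods, assuming plain HTTP for the moment (every hostname is a placeholder):

```
# Method 1: pass requests through unchanged; the new server
# must still answer for the old name in the Host: header
http_port 80 accel defaultsite=oldname.example
cache_peer newserver.example parent 80 0 no-query originserver name=newSrv

# Method 2: additionally rewrite the Host: header to the new
# name by adding forcedomain= to the cache_peer line:
#   cache_peer newserver.example parent 80 0 no-query originserver forcedomain=newserver.example name=newSrv
```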

I strongly suspect what you want to do can be done.  But you still 
haven't told us enough about how things happen on a protocol level to 
provide a useful answer.


The best that I can give at the moment is the following, as I think it's 
integral to what I think you are wanting to do.


Link - Reverse Proxy Mode
 - https://wiki.squid-cache.org/SquidFaq/ReverseProxy



--
Grant. . . .
unix || die





Re: [squid-users] Sorry if this has been asked but I can't find an answer anywhere ...

2021-09-27 Thread Grant Taylor

On 9/27/21 6:52 AM, Mike Yates wrote:
So my idea is to install a single squid server and redirect the internal 
servers to that url instead of the original one.


Your use of "redirect" sounds like you will be re-configuring the 
clients to connect to the squid server.


Will you be configuring the clients to know that they are using a proxy 
server?  Or will you be using some sort of transparent and / or reverse 
proxy?


Squid will then redirect the post to the correct external server as 
it is installed on a server that has external access  I hope this 
is possible


What does "redirect" mean in this context?

It sounds like you mean a reverse proxy.  --  Which is what I've given 
you links to documentation for.


Reading between the lines, the clients won't have access to the 
Internet.  So, something like a 301 / 302 / 307 / 308 HTTP redirect 
won't do much good by themselves.


You need to clearly articulate the following:

1)  Are the clients configured to knowingly use a proxy?
 - The communications protocol they use is slightly different.
2)  What hostname are the clients connecting to?
3)  What protocol(s) are the clients using to connect to said hostname?
4)  Is TLS being used on any part of the connection?
5)  What hostname is Squid supposed to connect to?
6)  Will any part of the API URL change /other/ /than/ the hostname?

These six questions have subtle but distinct interaction with each other.



--
Grant. . . .
unix || die





Re: [squid-users] Kerberos authentication with multiple squids

2021-10-14 Thread Grant Taylor

On 10/13/21 1:48 PM, Markus Moeller wrote:
The problem lies more in the way how Kerberos proxy authentication 
works. The client uses the proxy name to create a ticket and in this 
case it would be the name of the first proxy e.g. proxy1.internal.  The 
first proxy will pass it through to the authenticating proxy for 
authentication proxy2.internal.


My understanding is that there is a way that a Kerberized service 
(proxy1 in this case) could act as a Kerberos protocol proxy agent (of 
sorts): it asks for a special type of Kerberos ticket on behalf of the 
client (client0) that is requesting service from it (proxy1), and then 
uses that ticket when forwarding connections on to another host (proxy2 
in this case).  Is my general understanding of Kerberos wrong?


Does Squid support such a Kerberos protocol proxy agent (term?) function?



--
Grant. . . .
unix || die





Re: [squid-users] Kerberos authentication with multiple squids

2021-10-17 Thread Grant Taylor

On 10/16/21 1:31 PM, Markus Moeller wrote:

I think you talk about a kdc proxy, which is for another case.


I don't think so.  I'm not talking about using a proxy to access the KDC.

I'm talking about using a component of the following scenario:

1)  Client uses traditional username and password to authenticate to an 
IMAP server.
2)  IMAP server uses the provided credentials to request some sort of 
ticket (I don't remember what type) on the user's behalf.
3)  IMAP server uses the ticket on the user's behalf to access the 
user's messages stored on an NFS server.


I'm suggesting that the proxy1 (from the other message) do something on 
the user's behalf to request a ticket for the user that proxy1 can then 
use to authenticate as the user to proxy2.


It's been quite a while since I've read about this so I may be 
completely wrong.  But I distinctly remember there was a way to have an 
intermediate (e.g. IMAP) server accept username and password from 
clients and access a backend file server on the client's behalf in such 
a way that the backend server saw normal kerberized connections.




--
Grant. . . .
unix || die





Re: [squid-users] Kerberos authentication with multiple squids

2021-10-17 Thread Grant Taylor

On 10/17/21 10:46 AM, Markus Moeller wrote:
I see,  I think this would mean using Basic Auth to proxy1 which then 
gets a Kerberos ticket for the user to authenticate to proxy2.  This is 
possible, but I would not think it is a good secure option.


I think that we're now talking about the same function.

I don't think that HTTP's Basic (realm) Authentication is required.

My understanding is that you can use Kerberos from client0 to proxy1 and 
that proxy1 can use the same mechanism to get a special ticket to 
communicate from proxy1 to proxy2 as the original user.


The scenario I described in the last email was to set the stage to 
describe where the Kerberos protocol proxying was happening, not the 
method in the client to server part.




--
Grant. . . .
unix || die





Re: [squid-users] Kerberos authentication with multiple squids

2021-10-18 Thread Grant Taylor

On 10/17/21 10:57 AM, Grant Taylor wrote:
My understanding is that you can use Kerberos from client0 to proxy1 and 
that proxy1 can use the same mechanism to get a special ticket to 
communicate from proxy1 to proxy2 as the original user.


I looked at my copy of Kerberos - The Definitive Guide by Jason Garman 
from O'Reilly and found the following terms that seem to be in play here.


The concept that I'm alluding to seems to be broadly known as 
"credential forwarding".  More specifically there are a couple of 
options / constraints that can be added to a TGT that seem to come into 
play here; forwardable tickets and proxiable tickets.  The latter seems 
to be a subset of the former.


The following quote comes from the Ticket Options section of chapter 3 - 
Protocols.  (Sorry, I don't have a page number when looking at 
O'Reilly's learning portal.)


--8<--
Proxiable tickets -- You can also set the proxiable flag on a ticket. 
Proxiable tickets are similar to forwardable tickets in that they can be 
transferred to another host.  However, a proxiable TGT can only be used 
to acquire further service tickets; it cannot be used to acquire a new 
TGT on the target host.

-->8--

This sounds to me like client0 could use a forwardable or proxiable 
ticket when talking to squid1, and Squid running on squid1 could get 
and use a service ticket on the user's behalf when talking to squid2.




--
Grant. . . .
unix || die





[squid-users] What is in a name? Squid vs SQuID

2022-10-03 Thread Grant Taylor

Hi,

I ran across the following statement referring to Squid in an ancient 
Sys Admin article talking about Linux Transparent Proxy.


Source Quench Introduced Delay (SQuID) is a popular freeware proxy 
server for UNIX machines (see "Software Resources" sidebar for more 
information).


Where the "Software Resources" sidebar has:


 Squid can be obtained from any of following sites:

http://squid.nalanr.net
ftp://squid.nlanr.net

The Squid Web site also contains a wealth of installation and 
configuration information. Squid manuals are also available from 
this site.


Does anyone know any history on Squid's name?  Was SQuID the proper name 
for the Squid caching proxy at one point in time?  Or is this perhaps a 
bad expansion in the article?


Link - Linux Transparent Proxy (Sys Admin, May 1999, Volume 8, Issue 5, 
Article 3)

 - https://www.muppetwhore.net/sysadmin/html/v08/i05/a3.htm



--
Grant. . . .
unix || die





Re: [squid-users] What is in a name? Squid vs SQuID

2022-10-03 Thread Grant Taylor

On 10/3/22 1:35 PM, Alex Rousskov wrote:
There is no relationship between the SQuID concept (RFC 1016) and our 
Squid Cache. I double-checked with Duane, the Squid creator. I bet the 
author of that article thought that Squid is an acronym and found a 
matching acronym in RFC 1016 :-).


Thank you for confirmation of what I suspected was the case.

It was one of those surprising things that cause you to do a double take 
and check things you think you know.




--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-19 Thread Grant Taylor

On 10/19/22 8:33 AM, Alex Rousskov wrote:
I do not know exactly what you mean by "https proxy" in this context, 
but I suspect that you are using the wrong FireFox setting. The easily 
accessible "HTTPS proxy" setting in the "Configure Proxy Access to the 
Internet" dialog is _not_ what you need! That setting configures a plain 
text HTTP proxy for handling HTTPS traffic. Very misleading, I know.


+10 to the antiquated UI ~> worse UX.


You need a PAC file that tells FireFox to use an HTTPS proxy.


I believe you can use the FoxyProxy add-on to manage this too.



--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-20 Thread Grant Taylor

On 10/19/22 11:33 PM, Rafael Akchurin wrote:
The following line set in the Script Address box of the browser proxy 
configuration will help - no need for a PAC file for quick tests. Be 
sure to adjust the proxy name and port.


data:,function FindProxyForURL(u, h){return "HTTPS proxy.example.lan:8443";}


Is it just me, or is it slightly disturbing that JavaScript in a 
configuration property box is being executed?


I guess I had naively assumed that something else, ideally hardened 
against malicious content, was executing the JavaScript retrieved from 
the PAC file.  --  I feel like there should be a separation of 
responsibilities.



More info at https://webproxy.diladele.com/docs/network/secure_proxy/browsers/


Aside:  Why the propensity for running the HTTP, HTTPS, FTP, and SOCKS 
proxies on non-standard ports?  Why not run them on their standard 
ports; 80, 443, 21, and 1080 respectively?


I switched to using standard ports years ago to simplify configuring 
HTTP proxy support in Ubuntu installers; "http://proxy.example.net/", no 
need to fiddle with the port.  Or if you have DNS search domains 
configured, "http://proxy/" is sufficient.
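For illustration, the kind of squid.conf lines that standard-port approach implies (the certificate path and hostname are placeholders):

```
# forward proxy on the standard HTTP port, so
# http_proxy="http://proxy.example.net/" needs no explicit port
http_port 80

# TLS between client and proxy on the standard HTTPS port
https_port 443 tls-cert=/etc/squid/proxy.example.net.pem
```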




--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-20 Thread Grant Taylor

On 10/20/22 9:49 AM, Matus UHLAR - fantomas wrote:

proxy autoconfig is javascript-based but uses very limited javascript.


My comment was more directed at why is $LANGUAGE_DOESNT_MATTER used /in/ 
/the/ /location/ /field/?


while I agree javascript is not ideal, it's very hard to configure 
proper proxy configuration without using scripting language.


and, when we need scripting language, it's much easier to use something 
that has been implemented and is used in browsers.


I understand and agree with (re)using JavaScript as the chosen language. 
 That's not my concern.  (See above.)



because standard servers and not proxies usually run on standard ports.


I trust that you don't intend it to be, but that feels like a non-answer 
to me.


That's sort of tantamount to saying "I drive on the shoulder because 
there are cars on the road."


HTTP(S) connections /are/ the HTTP protocol and the standard port for 
HTTP protocol is port 80 for unencrypted connections and port 443 for 
encrypted connections.


I rarely see a web server and a proxy server (as in different service 
daemons) run /on/ /the/ /same/ /system/.  As such there is no conflict 
between ports on different systems / IPs.


The rare case where I do see a web server and a proxy server (still 
different service daemons) frequently are using different IPs.  The 
proxy is usually listening on a globally routed IP while the web server 
is listening on the loopback IP.


Then there is the entire different class where the same daemon functions 
as the web server and the proxy server.  Apache's HTTPD and Nginx 
immediately come to mind as fulfilling both functions.


So ... I feel like "de-conflicting ports" is as true as "having to have 
different IPs for different TLS certificates".


Also, FTP protocol (port 21) does not support proxying, and using FTP 
proxy usually involves hacks.


I completely disagree.

I've been using FTP through proxies for years.  Firefox (and 
Thunderbird) has an option /specifically/ for using FTP through proxies. 
 As depicted in the the picture of Firefox on the page that Rafael A. 
linked to.


All mainstream web browsers have had support for proxying FTP traffic 
for (at least) 15 of the last 25 years.  Up to the point that they 
started removing FTP protocol support from the browser.




--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-21 Thread Grant Taylor

On 10/20/22 11:58 PM, Adam Majer wrote:

It's basically by convention now.


Sure.

Conventions change over time.

Long enough ago 3128 wasn't the conventional port for Squid.

It used to be a convention to allow smoking in public / government 
offices.  Now the convention is the exact opposite.


Port 3128 has been set as default port by Squid for more than 2 
decades.


Agreed.


Don't expect a change.


I'm not expecting a change.

At most I was hoping for a discussion about it.

Maybe, hopefully, said discussion will spark an idea in at least one 
person's head and that might turn into something in 10 or 20 years.


Secondly, like it was said already, servers and proxies are different 
things.


Semantics are VERY important here.  HTTP daemons and proxy daemons are 
both servers.  They just serve slightly different things.


And you need to understand the difference between forward and reverse 
proxies.


Agreed.  I've been using / leveraging / exploiting (in a good way) a 
combination of forward and reverse proxies for multiple decades.  They 
are distinctly different, but yet still remarkably similar.


Squid, Apache's HTTPD, Nginx, and even contemporary IIS can act as both 
an HTTP(S) server (a.k.a. reverse proxy) and / or a forward proxy.


Reverse proxies can sit on the regular ports because that's their 
job -- to act as origins.




Forward proxies don't sit on regular server ports because they require 
explicit config on the client.


If we're explicitly configuring the client anyway, then what influence 
does the chosen port have on that explicit configuration?


Curl's man page is rather convenient and somewhat supportive ~> telling:

```
   Using an environment variable to set the proxy has the same 
effect as using the -x, --proxy option.


   http_proxy [protocol://]<host>[:port]
          Sets the proxy server to use for HTTP.

   HTTPS_PROXY [protocol://]<host>[:port]
          Sets the proxy server to use for HTTPS.
```

Notice how the `[:port]` is /optional/?

Curl (and other things) will default to using the IANA defined port for 
`[protocol://]` if `[:port]` is unspecified.


So ... why do we /need/ to use a different port than what IANA has 
defined for `[protocol://]`?


I'm genuinely asking why we /need/ to use a different port.

What, other than convention or even port contention, is prompting us to 
use a port other than what IANA has defined for the protocol?


And don't forget we used to have transparent proxies which kind of died 
(I think?) thanks to TLS.


I question the veracity of /used/ /to/.

Yes, TLS made things more difficult.  But in a corporate (like) 
environment doing TLS monkey in the middle is quite possible with Squid.


I am and have been doing exactly that on my personal devices for the 
last two years.



Port 3128 is for *forward* proxy setup.


That's by convention / Squid default.

I've run forward HTTP proxies on port 80 and forward HTTPS proxies on 
port 443 for years without any problems.  What's more is that it 
simplifies the client configuration by removing the need to specify the 
port.  The following works perfectly fine for curl, et al.


   export http_proxy="proxy.home.example"

So -- again -- why do we /need/ to use a different port?

I fully acknowledge /convention/ and /contention/.  If that's the answer 
to the question, then so be it.  But I'm not yet convinced of such.




--
Grant. . . .
unix || die



smime.p7s
Description: S/MIME Cryptographic Signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-21 Thread Grant Taylor

On 10/21/22 2:25 AM, Matus UHLAR - fantomas wrote:
apparently this is a hack to be able to define proxy autoconfig in the 
location field.


Since it has very restricted capabilities, it's apparently non-issue.

I guess that you can only define FindProxyForURL() this way.


ACK

Thank you for the additional details Matus.


I know of such servers.


I did say /rarely/.  ;-)  I too have seen them.  They are just a 
disproportionately small number of web and proxy servers.


And, HTTP proxy does not even have defined own port so people use random 
ports or ports commonly used for this service.


Sure it does.  An HTTP proxy server is an HTTP server.  HTTP has port 80 
defined.


From memory, the only effective difference between explicit proxy mode 
and transparent proxy mode (from Squid's point of view) is the use of 
the `CONNECT` vs `GET` et al, command and how the hostname is specified.
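For reference, the three request shapes look roughly like this on the wire (hostnames are placeholders):

```
GET http://www.example.com/ HTTP/1.1    <- absolute-form: explicit proxy
Host: www.example.com

GET / HTTP/1.1                          <- origin-form: origin server, or
Host: www.example.com                      an intercepted connection

CONNECT www.example.com:443 HTTP/1.1    <- explicit proxy, opaque TLS tunnel
```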



the beautiful nature of HTTP allows us to define port within URL,


That is a very nice convenience.  But a /convenience/ does not equate to 
a /need/.


therefore people tend so use separate ports instead of allocating 
extra IP addresses for proxy usage.


That is a convention.  But a /convention/ does not equate to a /need/.


I think Adam Meyer also explained it nicely.


Yes, Adam said that 3128 is a /convention/.

convention != need


That is FTP through HTTP proxy. Not FTP through FTP proxy.


Hum.  I want to disagree, but I don't have anything to counter that at 
the moment.


I repeat, FTP protocol does not support proxies and port 21 would be of 
low usage here.


I remember reading things years ago where people would use a bog 
standard FTP client to connect to an /FTP/ server acting as an /FTP/ 
proxy.  I believe they then issued `OPEN` commands on the /FTP/ proxy 
just like they did on their /FTP/ client.  --  My understanding was that 
this had absolutely /nothing/ to do with /HTTP/, neither protocol nor 
proxy daemon.  Nor was it telnet / rlogin / etc. to run a standard ftp 
client on a bastion host.  Though that was also a solution at the time.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-21 Thread Grant Taylor

On 10/21/22 11:25 AM, Grant Taylor wrote:
I remember reading things years ago where people would use a bog 
standard FTP client to connect to an /FTP/ server acting as an /FTP/ 
proxy.


I knew that I had seen something about using an FTP proxy that wasn't 
HTTP related.


I encourage you to read ~/.ncftp/firewall for more details. 
Conveniently copied below.


I'd like to point out two things:

1)  The syntax and ports used only reference FTP.
2)  The 'NcFTP does NOT support HTTP proxies that do FTP, such as 
"squid" or Netscape Proxy Server.  Why?  Because you have to communicate 
with them using HTTP, and this is a FTP only program.'


So ... yes, I am quite certain that there are FTP /proxies/ that are NOT 
using HTTP.


--8<--
# NcFTP firewall preferences
# ==
#
# If you need to use a proxy for FTP, you can configure it below.
# If you do not need one, leave the ``firewall-type'' variable set
# to 0.  Any line that does not begin with the ``#'' character is
# considered a configuration command line.
#
# NOTE:  NcFTP does NOT support HTTP proxies that do FTP, such as "squid"
#        or Netscape Proxy Server.  Why?  Because you have to communicate with
#        them using HTTP, and this is a FTP only program.
#
# Types of firewalls:
# --
#
#type 1:  Connect to firewall host, but send "USER user@real.host.name"
#
#type 2:  Connect to firewall, login with "USER fwuser" and
# "PASS fwpassword", and then "USER user@real.host.name"
#
#type 3:  Connect to and login to firewall, and then use
# "SITE real.host.name", followed by the regular USER and PASS.
#
#type 4:  Connect to and login to firewall, and then use
# "OPEN real.host.name", followed by the regular USER and PASS.
#
#type 5:  Connect to firewall host, but send
# "USER user@fwuser@real.host.name" and
# "PASS pass@fwpass" to login.
#
#type 6:  Connect to firewall host, but send
# "USER fwuser@real.host.name" and
# "PASS fwpass" followed by a regular
# "USER user" and
# "PASS pass" to complete the login.
#
#type 7:  Connect to firewall host, but send
# "USER user@real.host.name fwuser" and
# "PASS pass" followed by
# "ACCT fwpass" to complete the login.
#
#type 8:  Connect to firewall host, but send "USER user@real.host.name:port"
#
#type 9:  Connect to firewall host, but send "USER user@real.host.name port"
#
#type 0:  Do NOT use a firewall (most users will choose this).
#
firewall-type=0
#
#
#
# The ``firewall-host'' variable should be the IP address or hostname of
# your firewall server machine.
#
firewall-host=firewall.home.example.net
#
#
#
# The ``firewall-user'' variable tells NcFTP what to use as the user ID
# when it logs in to the firewall before connecting to the outside world.
#
firewall-user=fwuser
#
#
#
# The ``firewall-password'' variable is the password associated with
# the firewall-user ID.  If you set this here, be sure to change the
# permissions on this file so that no one (except the superuser) can
# see your password.  You may also leave this commented out, and then
# NcFTP will prompt you each time for the password.
#
firewall-password=fwpass
#
#
#
# Your firewall may require you to connect to a non-standard port for
# outside FTP services, instead of the internet standard port number (21).
#
firewall-port=21
#
#
#
# You probably do not want to FTP to the firewall for hosts on your own
# domain.  You can set ``firewall-exception-list'' to a list of domains
# or hosts where the firewall should not be used.  For example, if your
# domain was ``probe.net'' you could set this to ``.probe.net''.
#
# If you leave this commented out, the default behavior is to attempt to
# lookup the current domain, and exclude hosts for it.  Otherwise, set it
# to a list of comma-delimited domains or hostnames.  The special token
# ``localdomain'' is used for unqualified hostnames, so if you want hosts
# without explicit domain names to avoid the firewall, be sure to include
# that in your list.
#
firewall-exception-list=.home.example.net,localhost,localdomain
#
#
#
# You may also specify passive mode here.  Normally this is set in the
# regular $HOME/.ncftp/prefs file.  This must be set to one of
# "on", "off", or "optional", which mean always use PASV,
# always use PORT, and try PASV then PORT, respectively.
#
#passive=on
#
#
#
# NOTE:  This file was created for you on Sat Jan 21 23:09:26 2017
#        by NcFTP 3.2.5.  Removing this file will cause the next run of NcFTP

Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-21 Thread Grant Taylor

On 10/21/22 2:51 AM, Matus UHLAR - fantomas wrote:
I should have added, that squid does support FTP proxying using one of 
hacks I mentioned (I haven't tested it yet).


I think I used Squid's FTP protocol support years ago.

And, since this requires other (FTP) protocol than the default (HTTP) at 
the proxy side, people free to configure it on random port they choose.


FTP proxying is so rarely used that it doesn't even have common port 
besides 21 used for FTP.

The fundamental core component of my (sub)thread is that alternate ports 
aren't /needed/.  The default IANA reserved port is perfectly fine.  -- 
Presuming that there isn't any contention or (site local) convention to 
use a different port.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-22 Thread Grant Taylor

On 10/21/22 11:30 PM, Amos Jeffries wrote:
Not just convention. AFAICT was formally registered with W3C, before 
everyone went to using IETF for registrations.


Please elaborate on what was formally registered.  I've only seen 3128 / 
3129 be the default for Squid (and a few things emulating squid).  Other 
proxies of the time, namely Netscape's and Microsoft's counterparts, 
tended to use 8080.


I'd genuinely like to learn more about and understand the history / 
etymology / genesis of the 3128 / 3129.



FYI, discussion started ~30 years ago.


ACK


The problem:

For bandwidth savings HTTP/1.0 defined different URL syntax for origin 
and relay/proxy requests. The form sent to an origin server lacks any 
information about the authority. That was expected to be known 
out-of-band by the origin itself.


HTTP/1.1 has attempted several different mechanisms to fix this over the 
years. None of them has been universally accepted, so the problem 
remains. The best we have is mandatory Host header which most (but sadly 
not all) clients and servers use.


HTTP/2 cements that design with mandatory ":authority" pseudo-header 
field. So the problem is "fixed" for native HTTP/2+ traffic. But until 
HTTP/1.0 and broken HTTP/1.1 clients are all gone the issue will still 
crop up.


I'm not entirely sure what you mean by "the authority".  I'm taking it 
to mean the identity of the service that you are wanting content from. 
The Host: header comment with HTTP/1.1 is what makes me think this.


My understanding is that neither HTTP/0.9 nor HTTP/1.0 had a Host: 
header and that it was assumed that the IP address you were connecting 
to conveyed the server that you were wanting to connect to.


I have very little technical understanding of HTTP/2 as I've not needed 
to delve into it and it has largely just worked for me.



And ... Squid still only supports HTTP/1.1 and older.


Okay.  That sort of surprises me.  But I have zero knowledge to disagree.

More importantly the proxy hostname:port the client is opening TCP 
connections to may be different from the authority-info specified in the 
HTTP request message (or lack thereof).


My working understanding of what the authority is seems to still work 
with this.


This crosses security boundaries and involves out-of-band information 
sources at all three endpoints involved in the transaction for the 
message semantics and protocol negotiations to work properly.


I feel like the nature of web traffic tends to frequently, but not 
always, cross security / administrative boundaries.  As such, I don't 
think that existence of proxies in the communications path alters things 
much.


Please elaborate on what out-of-band information you are describing. 
The most predominant thing that comes to mind, particularly with 
HTTP/1.1 and HTTP/2 is name resolution -- ostensibly DNS -- to identify 
the IP address to connect to.


What that text does not say is that when they are omitted by the 
**user** they are taken from configuration settings in the OS:


  * the environment variable name provides:
     - the protocol name ("http" or "HTTPS", aka plain-text or encrypted)
     - the expected protocol syntax/semantics ("proxy" aka forward-proxy)

  * the machine /etc/services configuration provides the default port 
for the named protocol.


Ergo the use of /default/ values when values are not specified.

I feel like this in a round about way supports my stance that the 
default ports are perfectly fine to use.


Attempting to use a reverse-proxy or origin server such a configuration 
may work for some messages, but **will** fail due to syntax or semantic 
errors on others.


I question the veracity of that statement.

Sure, trying to speak contemporary protocols (HTTP/1.1 or HTTP/2) to an 
ancient HTTP server is not going to work.


But I believe that Squid and Apache HTTPD can be configured to perform 
all three roles; origin server, reverse proxy, and forward proxy.


Aside:  Squid might not be a typical origin server in that you can't 
have it /directly/ serve /typical/ origin content.  However I believe it 
does function as an origin server for things like Squid error pages.


Likewise NAT'ing inbound port 443 or port 80 traffic to a 
forward-proxy will encounter the same types of issues - while it is 
perfectly fine to do so towards a reverse-proxy or origin server.


I believe that is entirely dependent on the capability and configuration 
of the forward proxy.  --  I've done exactly this with Apache HTTPD. 
Though I've not had the (dis)pleasure of doing so with Squid.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-24 Thread Grant Taylor

On 10/24/22 9:48 AM, LEMRAZZEQ, Wadie wrote:
But anyway, my next step is to use a PAC file, since it is the legacy 
method, if this doesn't work either I'm gonna use stunnels


I have (a superset of) the following in my PAC file.

It is working perfectly fine for me across multiple browsers and 
multiple OSs.


function FindProxyForURL(url, host) {
if (
dnsDomainIs(host, "example.com") ||
dnsDomainIs(host, "example.net") ||
dnsDomainIs(host, "example.org") ||
false
) {
return "DIRECT";
} else {
return "HTTPS 192.0.2.251:443; PROXY 192.0.2.251:80";
}
}

N.B. I'm doing TLS Monkey in the Middle with a self signed cert 
installed as a root CA in my client systems.  --  Being able to filter 
HTTPS content is WONDERFUL.
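For anyone who wants to sanity-check PAC logic like the above outside a browser, a rough Python emulation can help (the helper below approximates PAC's `dnsDomainIs()` suffix match, slightly more strictly than the real thing; addresses are from the documentation range):

```python
def dns_domain_is(host, domain):
    """Approximate the PAC dnsDomainIs() suffix match."""
    return host == domain or host.endswith("." + domain)

def find_proxy_for_url(url, host):
    """Mirror of the PAC FindProxyForURL() above, for offline testing."""
    if (dns_domain_is(host, "example.com")
            or dns_domain_is(host, "example.net")
            or dns_domain_is(host, "example.org")):
        return "DIRECT"
    return "HTTPS 192.0.2.251:443; PROXY 192.0.2.251:80"

print(find_proxy_for_url("http://www.example.com/", "www.example.com"))  # DIRECT
print(find_proxy_for_url("https://neverssl.com/", "neverssl.com"))
```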






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 2:43 AM, Matus UHLAR - fantomas wrote:

if by "transparent" you mean "intercepting" proxy, that is incorrect


By "transparent" I mean using network techniques to force clients to use 
a proxy that aren't themselves aware that they are using a proxy.



CONNECT is HTTP command designed for use with explicit HTTP proxy.


Agreed.

But what does Squid do differently after recognizing the request from 
the client; be it a GET, PUT, POST, or even a CONNECT; the former being 
transparent with the latter being explicit.  Squid will still proxy the 
request as it understands it dependent on configuration, ACLs, etc.


I currently maintain that there is little difference, other than the 
VERB used, between transparent and explicit proxy configuration.  Squid 
still largely does the same thing.


Or said another way, all Squid needed to do to be able to support both 
transparent and explicit was to understand the additional VERBs.  Much 
of the rest of the code was unchanged.


To me there is not a fundamental difference, beyond initial VERBs, for 
transparent and explicit configuration.  At least not anything like the 
differences between FTP, HTTP, and ICP.  Each of which are fundamentally 
different protocols.  Conversely transparent vs explicit is an extension 
of one protocol, namely HTTP.


ok, there's no explicit need. And since there's no explicit need to use 
port 80 for HTTP proxy, the convention is to use different port because 
of reasons stated before.


So port 3128 is based on convention.  And that convention requires more 
explicit configuration in clients.  Okay.  So be it.



These are the FTP protocol "hacks" I mentioned before.
The HTTP protocol was created with proxying in mind, FTP was not.
using specially crafted login name for connecting to anoter server is 
one of those hacks.


Okay.

I (mis)took "hacks" to be things more severe like is typically done with 
proxifiers used with SOCKS servers, e.g. altering / overloading system 
library calls.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote:
I prefer to explicitly state what one means by transparent because 
RFC2616 has defined transparent proxy diferently:


I do too.  I /thought/ that I was explicitly stating.  At least that was 
my intention.


Aside:  That's why I included my working definition.  So hopefully you 
would know what I meant even if I accidentally used the wrong term.


A "transparent proxy" is a proxy that does not modify the request 
or response beyond what is required for proxy authentication and 
identification.


term "interception proxy" better defines what happens here:

Instead, an interception proxy filters or redirects outgoing TCP port 
80 packets (and occasionally other common port traffic).


It seems as if I should (re)read RFC 2616 and refine my use of terms.

Based on the quoted sections, it seems to me like an intercepting proxy 
is a superset of a transparent proxy.


Aside:  I can see a conceptual way to not modify any of the TCP 
connection (source & destination IPs & ports) while still actively 
proxying the traffic.  --  I don't know if Squid supports this or not. 
But I do see conceptually what would be done.
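(For what it's worth, Linux's TPROXY facility is one realization of that concept: the proxy terminates the connection but presents the client's original source address toward the server. A squid.conf sketch, with the required iptables / ip-rule plumbing omitted:)

```
# Sketch only: Squid's TPROXY mode keeps the client's original source
# address on the server-side connection. Needs matching kernel routing
# and firewall rules, not shown here.
http_port 3129 tproxy
```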



FYI, Intercepting proxy must use measures to avoid host header forgery:

https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
https://www.kb.cert.org/vuls/id/435052


I'll have to read those.

squid must find out the original destination IP used and check, while in 
explicit mode it makes no sense.


I'll have to think about that.  Probably more so after reading the links 
you provided.


Aside:  I've long been a fan of and preferred explicit client 
configuration to use a proxy.



this is a bit different kind of hacks.

Generally the SOCKS library know where/how to connect, socks wrappers 
(like socksify, tsocks, proxychains) are used to make other software use 
socks proxy even if it does not support it.


Agreed.

and of course socks is generic bidiretional tcp/udp proxy, which makes 
it possible to implement it near over any kind of communication.


Yes, SOCKS is bidirectional.  However, inbound connections through it, 
e.g. FTP active connections, are time limited.  --  At least I'm not 
aware of any way to have a SOCKS proxy allow inbound traffic 
indefinitely a la. port forwarding in NAT or SSH remote port forwarding 
(assuming the real server is the SSH client).






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 11:03 AM, Matus UHLAR - fantomas wrote:

I think intercepting is better, more precise.


I think that Squid can be an interception proxy as it can filter / alter 
content.


I also think that Squid (as an interception proxy) can be used 
transparently.



those two are completely separate,


I'm not yet convinced.

proxy may be intercepting and modify content (e.g. filter), including 
squid.


I guess it could be said that the transparency, or modification of 
content, is one aspect and that how the client connects to the proxy, 
explicit or implicit (network magic), could be another aspect.


           +-------------+--------+
           | transparent | opaque |
+----------+-------------+--------+
| explicit |      2      |    1   |
+----------+-------------+--------+
| implicit |      3      |    4   |
+----------+-------------+--------+

I believe that Squid can be either transparent and / or opaque depending 
on its configuration.


I also believe that Squid can be either explicit and / or implicit via 
networking magic.


When I said that intercepting was a superset of transparent, I was 
including all four quadrants.


yes, especially PAC scripts are great to explicitly state what you need, 
including using socks for other than http(s)/ftp connections (direct 
smtp,imap,pop3 over socks)


Yep.

I guess PORT connections have to be allowed on the SOCKS server which is 
I'd say not common (can be dangerous)


Yes, the PORT connection must be allowed.  But the problem that I found 
was that the PORT declaration has a timeout / finite time that they 
would wait for connections.  E.g. ten minutes in the example I was 
looking at.


What's more is that the PORT connections must be declared /per/ 
/expected/ /connection/.  They aren't a generic "forward traffic from any 
Internet connected system into the SOCKS client" mechanism.
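To ground why that per-connection dance exists at all: in active-mode FTP the client tells the server, over the control channel, exactly where to dial back, via the `PORT` command's six decimal octets. A small decoder sketch (the example address is from the documentation range):

```python
def parse_port_args(args):
    """Decode FTP's 'PORT h1,h2,h3,h4,p1,p2' arguments into (ip, port)."""
    h1, h2, h3, h4, p1, p2 = (int(x) for x in args.split(","))
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

print(parse_port_args("192,0,2,10,218,57"))  # ('192.0.2.10', 55865)
```

That one expected connection, to that one port, is what a SOCKS BIND (or a stateful firewall) has to arrange a listener for, each time.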


passive connections are safe in case of ftp/ssl, where it's impossible 
to know for the proxy/firewall who connects where.


I don't think that it's impossible.  Rather it's just improbable.  It's 
technically possible to do TLS bump in the wire or other things like 
known keys (non-ephemeral / non-PFS) or sharing ephemeral / PFS keys 
from internal server with TLS monkey in the middle proxy.  Such is 
technically possible, just highly improbable.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 10:18 AM, Matus UHLAR - fantomas wrote:

term "interception proxy" better defines what happens here:

Instead, an interception proxy filters or redirects outgoing TCP port 
80 packets (and occasionally other common port traffic).


Where did you pull that quote from?  I don't see "interception" anywhere 
in RFC 2616.


Aside:  I'm thinking that we're having term collisions between "data 
transparency" and "network transparency".  Wherein a data transparent 
proxy doesn't modify the requested content and a network transparent 
proxy is a proxy that the client isn't aware that it's using.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 12:57 PM, Matus UHLAR - fantomas wrote:
That is why I prefer using "intercepting proxy" for case where 
connections between clients and servers intercepted by proxy, without it 
being configured in browsers.


Fair enough.


precisely, so what exactly aren't you convinced about? :-)


The term "transparent" having multiple meanings.

I believe we were talking past each other and now are not.


Have you noticed this with SOCKS server?


Yes, DANTE SOCKS server is exactly where I first read about the 
limitation that I'm talking about.  Subsequent reading of other SOCKS 
servers supported this limitation.


N.B. I'm specifically talking about how a SOCKS aware (FTP) client can 
ask that an external port be connected to the SOCKS client for a defined 
period of time (ten minutes in the examples I saw).  This is sufficient 
for most active FTP connections (presuming that the ftp client is also 
the socks client) as the data connection from the FTP server comes back 
to the SOCKS server ~> FTP client in short order.


I guess this applies for firewalls that will disable connections to the 
port later.  But the same applies for PASV connections and the reply 
when firewall at serer side is used.


Agreed.

Aside:  I don't think I've ever seen SOCKS be used to front public 
services.  Rather I've only ever seen SOCKS used for (private) clients.


When ssl/tls is used between client and server, intermediate gateways 
and firewalls don't know what ports do endpoints agree on using PORT/PASV.


Unless they intercept SSL connection (which kind of makes them FTP 
endpoints) or the client supports and issues FTP command "CCC" which is 
designed for this case.  I'm afraid not many FTP clients do that.


Agreed.

I think this middle box behavior is far more common on HTTPS in larger 
data centers where the middle box is used to enforce compliance and the 
likes.



agree.

the workaround is to use static list of ports at server side and 
configure server firewall to statically allow connection to these ports 
(optionally NAT them).


Yep.


however this is already not a SQUID issue.


Agreed.





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 1:01 PM, Matus UHLAR - fantomas wrote:

sorry, this one is from 7230, section 2.3


Thank you for the reference.

If we don't use "data" and "network" in addition to "transparent", 
result is ambiguous.  "intercepting proxy" is not.


Agreed.

It seems as if "transparent" in the context of proxies is as ambiguous 
as "secure" in the context of VPNs.


The former can be "data transparent" and / or "network transparent". 
The latter can be "privacy secure" and / or "integrity secure".  }:-)






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 1:09 PM, Grant Taylor wrote:
It seems as if "transparent" in the context of proxies is as ambiguous 
as "secure" in the context of VPNs.


The former can be "data transparent" and / or "network transparent". The 
latter can be "privacy secure" and / or "integrity secure".  }:-)


Oy vey.

For completeness -- I've continued reading -- RFC 1919: Classical versus 
Transparent IP Proxies § 4 -- Transparent application proxies -- ¶ 3 
starts with:


"A transparent application proxy is often described as a system that 
appears like a packet filter to clients, and like a classical proxy to 
servers."


So as I read it, RFC 1919 § 4 ¶ 3 supports "network transparency".

Then it continues with:

"Apart from this important concept, transparent and classical proxies 
can do similar access control checks and can offer an equivalent level 
of security/robustness/performance, at least as far as the proxy itself 
is concerned."


Which reads as if /network/ transparent proxies can be /data/ 
non-transparent.


Nomenclature and consistent definitions can be hard and can easily 
sideline discussions.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-25 Thread Grant Taylor

On 10/25/22 2:43 AM, Matus UHLAR - fantomas wrote:

These are the FTP protocol "hacks" I mentioned before.


FYI RFC 1919: Classical versus Transparent IP Proxies § 4.1 -- 
Transparent proxy connection example -- describes the operation of an 
intercepting / (network) transparent FTP proxy that does not require any 
FTP protocol hacks.  }:-)






Re: [squid-users] Does Squid support client ssl termination?

2022-10-26 Thread Grant Taylor

On 10/26/22 10:43 AM, mingheng wang wrote:

Hello all,


Hi,

   Since ssl_bump can generate self signed certificates on the fly, I 
wonder if this setup is possible, or even just in theory:
clients with necessary root CA installed connect to a local Squid. With 
ssl_bump and self signed certs,


I'm with you so far.  I've got such a Monkey in the Middle here at the 
house.



it always talks with the clients over HTTPS,


Please clarify / confirm if you're talking about HTTPS protection of the 
client to squid connection.  --  I ask because not all clients natively 
/ easily support HTTPS connection to Squid.


N.B. the connection between the client and Squid is completely 
independent of the connection between Squid and the next upstream server.



making clients believe their connections are secure;


This is the biggest hang up for me.  --  I don't think that the HTTPS 
communications with Squid in and of itself will cause clients to think 
that an insecure site is actually secure.


My client doesn't show that it has a secure connection to neverssl.com 
which doesn't support HTTPS (by design) despite communicating with Squid 
via HTTPS.


the local Squid then forwards the connections to a parent Squid server, 
which however, will only send data back in plain HTTP, i.e. in clear 
text, akin to a reverse proxy with ssl termination to its proxied site.


Okay.  I'm not sure why you would not have encryption on the downstream 
child Squid to the upstream parent Squid, but that's your choice.


   my goals are to cache data/modify requests even when connecting to 
https only sites,


Squid's TLS Monkey in the Middle should cache things without any 
problem.  So I don't see the need to do anything extra for this.


while avoiding using self signed certs to encrypt connections over the 
Internet,


I have no idea where the downstream child Squid is that's doing TLS 
MitM.  Nor do I have any idea where the upstream parent Squid is.  So I 
can't really comment about locality / Internet.


because this way, I can chain an https proxy with trusted certs 
in between.


"Trusted certs" is sort of ambiguous in this case as your TLS MitM 
/clients/ *trust* the root cert that the downstream child Squid is using.


I see no reason why you can't use similar methodology to protect the 
communications between the downstream child Squid to the upstream parent 
Squid.  --  Independent of who the cert used by the upstream parent 
Squid is from.


If the downstream child Squid has the root CA that signed the upstream 
parent Squid's TLS certificate in the downstream child Squid root CA 
store, then the connection between the two Squids is trusted.  Even if 
there are no public CAs involved.  }:-)
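A sketch of that child-to-parent trust on the downstream child (host name, certificate path, and exact option spellings are assumptions; check your Squid version's cache_peer documentation):

```
# Downstream child squid.conf: relay via the parent over TLS, validating
# the parent's certificate against a private CA.
cache_peer parent.example.net parent 443 0 tls tls-cafile=/etc/squid/private-ca.pem
never_direct allow all
```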






Re: [squid-users] ACL based DNS server list

2022-10-30 Thread Grant Taylor

On 10/25/22 7:27 PM, Sneaker Space LTD wrote:

Hello,


Hi,

Is there a way to use specific DNS servers based on the user or 
connecting IP address that is making the connection by using acls or any 
other method? If so, can someone send an example.


"Any other method" covers a LOT of things.  Including things outside of 
Squid's domain.


You could probably do some things with networking such that different 
clients connected to different instances of Squid each configured to use 
different DNS servers.  --  This is a huge hole in the ground and can 
cover a LOT of things.  All of which are outside of Squid's domain.






Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-10-31 Thread Grant Taylor

On 10/30/22 6:59 AM, squ...@treenet.co.nz wrote:

Duane W. would be the best one to ask about the details.

What I know is that some 10-12 years ago I discovered an message by 
Duane mentioning that W3C had (given or accepted) port 3128 for Squid 
use. I've checked the squid-cache archives and not seeing the message.


Right now it looks like the W3C changed their systems and only track the 
standards documents. So I cannot reference their (outdated?) protocol 
registry :-{ . Also checked the squid-cache archives and not finding it 
in the email history. Sorry.


Did you by chance mean IANA?

I looked and 3128 is registered to something other than Squid.

Nor did their search bring anything up for Squid.

I mean "authority" as used by HTTP specification, which refers to 
https://www.rfc-editor.org/rfc/rfc3986#section-3.2


Yes exactly. That is the source of the problem, perpetuated by the need 
to retain on-wire byte/octet backward compatibility until HTTP/2 changed 
to binary format.


Consider what the proxy has to do when (not if) the IP:port being 
connected to are that proxy's (eg localhost:80) and the URL is only a 
path ("/") on an origin server somewhere else. Does the "GET / HTTP/1.0" 
mean "http://example.com/"; or "http://example.net/"; ?


I would hope that it would return an error page, much like Squid does 
when it can't resolve a domain name or the connection times out.


The key point is that the proxy host:port and the origin host:port are 
two different authority and only the origin may be passed along in the 
URL (or URL+Host header).


Agreed.

When the client uses port 80 and 443 thinking 
they are origin services it is *required* (per 
https://www.rfc-editor.org/rfc/rfc9112.html#name-origin-form) to omit 
the real origins info. Enter problems.


Why would a client (worth its disk space) ever conflate the value of 
its configured proxy as the origin server?


I can see a potential for confusion when using (network) transparent / 
intercepting proxies.


I refer to all the many ways the clients may be explicitly or implicitly 
configured to be aware that it is talking to a proxy - such that it 
explicitly avoids sending the problematic origin-form URLs.


ACK

The defaults though are tuned for origin server (or reverse-proxy) 
direct contact.


I don't see how that precludes their use for (forward) proxy servers.

No Browser I know supports 
"http-alt://proxy.example.com?http://origin.example.net/index.html"; URLs.


But I bet that many browsers would support:

   http://proxy.example.com:8080/?http://origin.example.net/index.html

Also, I'm talking about "http://"; and "https://"; using their default 
ports of 80 & 443.


... "at your own risk" they technically might be. So long as you only 
receive one of the three types of syntax there - port 80/443 being 
officially registered for origin / reverse-proxy syntax.


I've been using them without any known problem for multiple years across 
multiple platforms, clients, and versions thereof.  So I'll keep using 
it at my own risk.


It is based on experience. Squid used to be a lot more lenient and tried 
for decades to do the syntax auto-detection. The path from that to 
separate ports is littered with CVEs. Most notably the curse that keeps 
on giving: CVE-2009-0801, which is just the trigger issue for a whole 
nest of bad side effects.


I wonder how much of that problematic history was related to HTTP/0.9 vs 
HTTP/1.0 vs HTTP/1.1 clients.


I similarly wonder how much HTTP/1.0, or even HTTP/0.9, protocol is used 
these days.


Also, there is the elephant in the room of we're talking about a proxy 
server which is frequently, but not always, on a dedicated system or IP. 
 As such, I have no problem predicating the use of the HTTP(80) and 
HTTPS(443) ports when there is no possible chance of confusion between 
forward proxy roles and origin server / reverse proxy roles.




--
Grant. . . .
unix || die





Re: [squid-users] Does Squid support client ssl termination?

2022-11-01 Thread Grant Taylor

On 10/31/22 7:32 PM, mingheng wang wrote:

Sorry about that, don't know why it only went to you.


Things happen.  That's why I let people know, in case unwanted things 
did happen.


I delved into the configuration the last few days, and found that 
Squid doesn't officially support cache_peer when ssl_bump is in 
use.


That surprises me.  I wonder if it's a technical limitation or an oversight.

Actually, I can't find a single tool in the market that can 
just encrypt any HTTP connection, "converting" it to an HTTPS 
connection. I'm reading RFCs and documentation to write my own proxy.


That really surprises me.

It's not a general proxy, but this seems like something that stunnel 
will do. (Either direction HTTPS <-> HTTP and HTTP <-> HTTPS.)


This is what still confuses me. A reverse proxy is supposed to proxy 
a web site. At least that's what I learnt from Nginx and Haproxy's 
documentation.  I'll read more on this when I have time.


I think of forward and reverse proxies as doing quite similar things 
with the primary difference being where in the path they are and how 
many sites will be accessed.


Forward:  (C)---(P)---(Big Bad Internet)-(S)
Reverse:  (C)-(Big Bad Internet)---(P)---(S)

Both take requests from clients and pass them to (what the proxy thinks 
is) the server.


But with the forward proxy interfacing between relatively few clients 
and significantly more servers.


Conversely the reverse proxy interfaces with significantly more clients 
and relatively few servers.


The reverse proxy tends to be explicitly configured where servers are 
while the forward proxy relies on standard name resolution to find them, 
usually DNS.


So, on one level, what the forward and reverse proxy do is similar, but 
how they do it is subtly different.
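
In Squid terms that subtle difference shows up directly in the 
configuration (hostnames here are hypothetical): the forward proxy just 
listens and finds origins via DNS, while the reverse proxy is explicitly 
told where its backend is:

```
# Forward proxy: clients are configured to use proxy.example.com:3128;
# origin servers are found via ordinary DNS resolution.
http_port 3128

# Reverse proxy: listens where clients expect the origin to be and
# forwards to an explicitly configured backend server.
http_port 80 accel defaultsite=www.example.com
cache_peer backend.example.internal parent 8080 0 no-query originserver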


Then there's this:

   Both:  (C)---(P)---(Big Bad Internet)---(P)---(S)

Where in both a client side forward proxy /and/ a server side reverse 
proxy are in use.  }:-)  This really is just both technologies being 
independently used at each end.


Very tough network environment. They can even somehow detect a 
confidential file going through the gateway, even with TLS.


I'm not going to ask questions.



--
Grant. . . .
unix || die





Re: [squid-users] Does Squid support client ssl termination?

2022-11-01 Thread Grant Taylor

On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote:

That is not true as a blanket statement.


Please clarify which statement / who you are addressing.

It seems as if you're addressing mingheng (copied below for convenience):

On 10/31/22 7:32 PM, mingheng wang wrote:
I delved into the configuration the last few days, and found that 
Squid doesn't officially support cache_peer when ssl_bump is in use.


But you may be addressing my statement (...):

On 11/1/22 10:44 AM, Grant Taylor wrote:
That surprises me.  I wonder if it's a technical limitation or an 
oversight.



On 11/1/22 11:33 AM, squ...@treenet.co.nz wrote:
What Squid officially *does not* support is decrypting traffic then 
sending the un-encrypted form to a HTTP-only cache_peer.


Please elaborate.  I'm trying to develop a mental model of what is and 
is not supported with regard to client / proxy / server communications. 
I'm unclear on how this applies to the two potential HTTPS streams; 
client-to-proxy and proxy-to-server.  Or if this is more applicable to 
TLS-Bump on implicit / network transparent / intercepting proxies where 
the client thinks that it's talking HTTPS to the origin server and the 
proxy would really be downgrading security by stripping TLS.


Here is my mental model based on my current understanding.  Is the 
following diagram accurate?


+-----------+-------------+-----------+
|           |  P2S-HTTP   | P2S-HTTPS |
+-----------+-------------+-----------+
| C2P-HTTP  |  supported  | supported |
+-----------+-------------+-----------+
| C2P-HTTPS | unsupported | supported |
+-----------+-------------+-----------+
  C2P = Client to Proxy communication
  P2S = Proxy to Server communication

All other permutations of inbound TCP/TLS, http:// or https:// URL, and 
outbound TCP/TLS should currently work to some degree. The more recent 
your Squid version the better it is.


ACK



--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-11-01 Thread Grant Taylor

On 11/1/22 1:24 PM, squ...@treenet.co.nz wrote:

No I meant W3C. Back in the before times things were a bit messy.


Hum.  I have more questions than answers.  I'm not aware of W3C ever 
assigning ports.  I thought it was /always/ IANA.


Indeed, thus we cannot register it with IETF/IANA now. The IANA http-alt 
port would probably be best if we did go official.


ACK

You see my point I hope. A gateway proxy that returns an error to 
*every* request is not very good.


Except it's not "/every/ /request/."  It's "/every/ /request/ /of/ /a/ 
/specific/ /type/" where type is an HTTP version.


What does CloudFlare or any of the other big proxy services or even 
other proxy applications do if you send them an HTTP/1.0 or even 
HTTP/0.9 request without the associated Host: header?



There is no "configured proxy" for this use-case.

Those are the two most/extremely common instances of the problematic 
use-cases. All implicit use of proxy (or gateway) have the same issue.


How common is the (network) transparent / intercepting / implicit use of 
Squid (or any proxy for that matter)?


All of the installs that I've worked on (both as a user and as an 
administrator) have been explicit / non-transparent.


I think you are getting stuck with the subtle difference between "use 
for case X" and "use by default".


ANY port number can be used for *some* use-case(s).


Sure.


"by default" has to work for *all* use-cases.


I disagree.

Note that you are now having to add a non-default port "8080" and path 
"/" to the URL to make it valid/accepted by the Browser.


You were already specifying the non-default-http port via the 
"http-alt://" scheme in your example.


Clients speaking HTTP origin-form (the http:// scheme) are not permitted 
to request tunnels or equivalent gateway services. They can only ask for 
resource representations.


I question the veracity of that.  Mostly around said client's use of an 
explicit proxy.



Port is just a number, it can be anything *IF* it is made explicit.
The scheme determines what protocol syntax is being spoken and thus what 
restrictions and/or requirements are.


... and so the protocol for talking to a webcache service is http-alt://.
Whose default port is not 80 nor 443 for all the same reasons why Squid 
default listening port is 3128.


If we wanted to we could easily switch Squid default port to 
http-alt/8080 without causing technical issues. But it would be annoying 
to update all the existing documentation around the Internet, so not 
worth the effort changing now.


Ditto. Though the legacy install base has a long long long tail. 26 
years after HTTP/1.0 came out and HTTP/0.9 still has use-cases alive.


Where is HTTP/0.9 still being used?

Decreasing, but still a potentially significant amount of traffic seen 
by Squid in general.


Can you, or anyone else, quantify what "a potentially significant amount 
of traffic" is?


Do these cases *really* /need/ to be covered by the /default/ 
configuration?  Or can they be addressed by a variation from the default 
configuration?


Ah, if you have been treating it like an irrelevant elephant that is 
your confusion. The "but not always" is a critical detail in the puzzle 
- its side-effects are the answer to your initial question of *why* 
Squid defaults to X instead of 80/443.


I have no problems using non-default for the "but not always" 
configurations.




--
Grant. . . .
unix || die





Re: [squid-users] FW: Encrypted browser-Squid connection errors

2022-11-01 Thread Grant Taylor

On 11/1/22 6:27 PM, squ...@treenet.co.nz wrote:
No, you cropped my use-case description. It specified a client which was 
*unaware* that it was talking to a forward-proxy.


Sorry, that was unintentional.

Such a client will send requests that only a reverse-proxy or origin 
server can handle properly - because they have explicit special 
configuration to do so.


ACK

In all proxying cases there is special configuration somewhere. For 
forward-proxy it is in the client (or its OS so-called "default"), for 
reverse-proxy it is in the proxy, for interception-proxy it is in both 
the network and the proxy.


ACK

The working ones deliver an HTTP/1.1 302 redirect to their companies 
homepage if the request came from outside the company LAN. If the 
request came from an administrators machine it may respond with stats 
data about the node being probed.


I suspect that Squid et al. could do similar.  ;-)

Almost all the installs I have worked on had interception as part of 
their configuration.


Fair enough.

It is officially recommended to include interception as a backup 
to explicit forward-proxy for networks needing full traffic control 
and/or monitoring.


I've taken things one step further.  I forego the interception and 
simply have the firewall / router hard block traffic not from the proxy 
server.  }:-)


But short of that, I see and acknowledge the value of interception.

I take it from your statement you have not worked on networks like 
web-cafes, airports, schools, hospitals, public shopping malls who all 
use captive portal systems, or high-security institutions capturing 
traffic for personnel activity audits.


I have worked in schools, and other public places, some of which had a 
captive portal that intercepted to a web server to process registration 
or flat blocked non-proxied traffic.  The proxy server in those cases 
was explicit.


There are also at least a half dozen nation states with national 
firewalls doing traffic monitoring and censorship. At least 3 of the 
ones I know of use Squid's for the HTTP portion.


I'm aware of a small number of such nation states.  I assume that there 
are many more.  I was not aware that Squid played in that arena.


ACK. That is you. I am coming at this from the maintainer viewpoint 
where the entire community's needs have to be balanced.


I maintain that the /default/ does not have to work for /all/ use cases.

I agree that the /default/ should work for /most/ use cases.

The current default doesn't work on servers using NLD Active API Server. 
 Ergo the current default doesn't work on /all/ use cases.  }:-)


And you were specifying the non-default-'http-alt' port via the 
"http://"; scheme in yours.
Either way these are two different HTTP syntax with different "default 
port" values.



An agent supporting the http:// URL treats it as a request for some 
resource at the HTTP origin server indicated by the URL authority part 
or Host header.


An agent supporting the http-alt:// URL treats it as a request to 
forward-proxy the request-target specified in the URL query segment, 
using the upstream proxy indicated by the URL authority part or Host 
header.


If I'm understanding correctly, this is a case of someone asking Bob to 
connect to Bob.  That's not a thing.  Just talk directly to Bob.



The ones I am aware of are:
  * HTTP software testing and development
  * IoT sensor polling
  * printer network bootstrapping
  * manufacturing controller management
  * network stability monitoring systems


Why is anything developed in the last two decades green fielding with 
HTTP/0.9?!?!?!


I doubt anyone can quantify it accurately. But worldwide use of HTTP/1.1 
is also dropping, and at a faster rate than 0.9/1.0 right now as the 
more efficient HTTP/2+ expand.


Sure.  There should be three categories of migrations:

HTTP/0.9 to something
HTTP/1.0 to something
HTTP/1.1 to HTTP/2

I sincerely hope that the somethings are going to HTTP/1.1 or HTTP/2.

HTTP/1.1 specification requires semantic compatibility. So long as 1.1 
is still a thing the older versions are likely to remain as well. 
Undesirable as that may be.


ACK



--
Grant. . . .
unix || die





Re: [squid-users] ACL based DNS server list

2022-11-02 Thread Grant Taylor

On 11/2/22 4:03 AM, David Touzeau wrote:

It should be a good feature request that the Squid DNS client supports eDNS
eDNS can be used to send the source client IP address received by Squid 
to a remote DNS.


Does Squid even have its own DNS resolver / lookup mechanism?

I naively assumed that Squid simply used the system's name resolution 
capabilities, be that DNS, /etc/hosts, or even NIS(+).


As such, I would be shocked if there is any plumbing to pass additional 
information; e.g. down stream proxy client, to influence how the name 
resolution happens.


Maybe I'm wrong.  Hopefully I'll learn something from how others respond.

In this case the DNS will be able to change its behavior depending on 
the source IP address.


I take that to mean that DNS will change its behavior based on the 
EDNS0 Client Subnet information.  Because DNS will still see Squid as 
the client of the DNS query.


Aside:  There's a chance that the -- as I understand it -- suggested /24 
aggregation of E.C.S. will not be granular enough to provide the OP's 
desired result.


N.B. the E.C.S. interactions that I've had have used /24 or larger 
subnets to protect client identity.




--
Grant. . . .
unix || die





Re: [squid-users] Does Squid support client ssl termination?

2022-11-02 Thread Grant Taylor

On 11/1/22 4:17 PM, squ...@treenet.co.nz wrote:

Yes I was addressing mingheng's statement.


Thank you for clarifying.

The first thing you need to do is avoid that "HTTPS" term. It has 
multiple meanings and they cause confusion. Instead decompose it into 
its TLS and HTTP layers.


Largely okay.

However, a minor nitpick:  TCP, TLS, and HTTP are three distinct things.

TCP is the traditional transport.
TLS is the optional presentation layer that rides on top of TCP.
HTTP is the application layer protocol that's spoken between endpoints 
which rides on top of TLS if present or TCP if TLS is not present.


N.B. I'm eliding UDP / QUIC.


* A client can use TCP or TLS to connect to a proxy.
  - this is configured with http_port vs https_port

* Independently of the connection type the client can request http:// or 
https:// URLs or CONNECT tunnels.


Do you have any recommendation of clarifying / consistent terms for 
using to describe the connection between the client and the proxy with 
the goal of differentiating it from the connection between the proxy and 
the server?


I'll argue, but be open to arguments to the contrary, that both 
connections are using the HTTP application layer protocol on top of 
whatever transport is being used; TCP or TCP+TLS.
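
The http_port vs https_port distinction above, as a hedged squid.conf 
sketch (certificate paths are hypothetical):

```
# Plain-TCP listener: client-to-proxy traffic is unencrypted HTTP.
http_port 3128

# TLS listener: the client-to-proxy connection itself is wrapped in TLS,
# independent of whether the requested URL is http:// or https://.
https_port 3129 tls-cert=/etc/squid/proxy.crt tls-key=/etc/squid/proxy.key
```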


* Independent of what the client is doing/requesting, a cache_peer may 
be connected to using TCP or TLS.

  - this is configured with cache_peer tls options (or their absence)

* Independent of anything else, a cache_peer MAY be asked to open a 
CONNECT tunnel for opaque uses.

  - this is automatically decided by Squid based on various criteria.


Oy vey!

I had forgotten about using HTTP's CONNECT to carry non-HTTP traffic.

TCP is the foundation layer. On top of that can be HTTP transfer or TLS 
transfer. Transfer layers can be nested infinitely deep in any order.


I'm avoiding -- what I've seen referenced as -- "chaining" for this 
discussion.


I'm focusing on the what traditional web browsers / clients support out 
of the box; client-to-proxy and proxy-to-server.


After all, even when chaining is in scope, the chained / down stream 
proxy is really functioning as the server that the first / upstream 
proxy connects to.  Thus it's really higher layer traffic as far as the 
first / upstream proxy is concerned.



So "HTTPS" can mean any one of things like:
  1) HTTP-over-TLS (how Browsers handle https:// URLs)
  2) HTTP-over-TLS (sending http:// URLs over a secure connection)
  3) HTTP-over-TLS-over-TLS (relay (1) through a secure cache_peer)
  4) HTTP-over-TLS-over-HTTP (relay (1), (2) or (3) through an insecure 
cache_peer via CONNECT tunnel)


Hence my question about nomenclature.

...really big snip...

Vaguely yes. There are three dimensions to the matrix, you only have two 
shown here.


Please elaborate.  I'm not following what the 3rd dimension would be 
with the small amount of coffee that I've had.



The box showing "unsupported" has "supported" in its other dimension.


I'll wait for your elaboration and to finish my coffee before trying to 
understand that.  Also, $WORK beckons.




--
Grant. . . .
unix || die





Re: [squid-users] Does Squid support client ssl termination?

2022-11-04 Thread Grant Taylor

On 11/4/22 7:05 AM, Amos Jeffries wrote:
Aye, that is the terminology definitions of them. Which does not clearly 
convey the recursive layer/nesting properties. The way I suggested to 
think of TLS and HTTP as transfer layers helps clarify that property.


I will concede "differentiate", but I don't agree that it "clarifies".

The best I have found is the rule-of-thumb to avoid abbreviations that 
have multiple meanings and use simple nouns that the reader understands 
already to build a compound noun that they can comprehend. As such you 
will find my wording varies between discussions, and can adjust as I 
learn what terms the others understand already.


Okay

How would you describe the following in a green field discussion to 
someone without any prior context to suggest nomenclature?


Client uses an explicit proxy connection with TLS encryption to ask the 
proxy to request a TLS encrypted web page on the client's behalf.


To Squid the transport is (almost) always TCP. Whether TLS is treated as 
transport layer or application layer depends on the Squid features (eg 
SSL-Bump).


Eh  This isn't a photon; it can't be both a wave and a particle. 
I feel like it's really one thing and what Squid does with it may be 
different based on if Squid is SSL-Bumping or not.  After all, both are 
exactly the same thing to the client, independent of if Squid is 
SSL-Bumping or not.


So for your purpose of understanding the possibilities it is best to 
think of it as just another transfer protocol that Squid can receive 
like HTTP. Which can either transfer opaque client information, or 
another type of transfer protocols "nested" inside it.


Hum.  So if I understand you correctly, this could be HTTP application 
layer protocol on top of an unencrypted TCP transport /or/ on top of an 
encrypted TCP+TLS transport?


Oh, I see (I think). I use nesting or layering (from OSI model 
terminology) because "chain" is used by HTTP in the definition of how 
traffic is routed between multiple agents. For example; 
client->squid->server is a chain.


I don't consider client -> squid -> origin to be a chain of proxies.

I do consider client -> squid -> $SOME_OTHER_PROXY -> origin to be a 
chain of proxies.


To me for it to be a chain of proxies, there must be multiple proxies 
involved.


N.B. maybe this is somewhat a problem of nomenclature.  Hence why I have 
explicitly typed out "chain /of/ /proxies/" here.


By the very nature of how proxies work, even for the simplest method of 
an unencrypted TCP transport from client to proxy and then an 
unencrypted TCP transport from the proxy to the origin server, there are 
three parties involved; client, proxy, and origin server.


What's more is that this three party system is baked into many 
contemporary clients.  Conversely, almost everything needs an extremely 
special configuration to add, or chain, an additional intermediate proxy 
in the middle.  Hence why I think that "proxy chaining" is very special. 
 --  After I type that, the nomenclature "/proxy/ chaining" even 
supports that there are multiple proxies.


N.B. "origin server" may be a misnomer as from the client's and Squid's 
point of view, it may not be an origin server and may in fact be an 
additional layer of reverse proxying unbeknownst to the client nor Squid.



Browsers are origin-client software. They deal with these layers:
  * HTTP (http:// to origin, or http:// to traditional plain-text 
forward-proxy), or


I believe that's really two different things in an explicitly configured 
proxy use case, because what the client will do is subtly, but 
distinctly different and that difference is important.



  * HTTP-over-TLS (https:// to origin), or
  * HTTP-over-TLS-over-HTTP (traditional https:// to plain-text 
forward-proxy).


Recently some started handling HTTP-over-TLS-over-HTTP-over-TLS - which 
is traditional https:// to a secure/encrypted forward-proxy.


Maybe it's just me, but I don't know that I could extract what is 
happening, without a lot of thought, from these descriptions.


 - HTTP-over-TCP
 - HTTP-over-TLS
 - HTTP-over-TCP-over-HTTP-over-TCP
 - HTTP-over-TLS-over-HTTP-over-TCP
 - HTTP-over-TCP-over-HTTP-over-TLS
 - HTTP-over-TLS-over-HTTP-over-TLS

I don't see a good clean / uniform cut / divide for determining what is 
what, client to origin, client to proxy, or proxy to origin.


There you are running into the ambiguity of "chain". Using both its 
meanings in one sentence.


I don't think so.  See above.

I'm fairly certain of what I think to be a proxy chain.  It seems as if 
you are also fairly certain of what you think to be a proxy chain.  But 
it seems like we may be using different definitions of what is a proxy 
chain.



The three dimensions in play are:
  1) protocol X being spoken between client and Squid
  2) protocol Y the client is requesting to use with the origin server
  3) protocol Z actually being spoken between Squid and next-hop 
peer/server


Ah.  Y

Re: [squid-users] Squid web isolation

2022-11-14 Thread Grant Taylor

On 11/14/22 10:08 AM, Alex Rousskov wrote:
AFAICT, "Web Isolation" requires rewriting HTTP responses. Yes, Squid 
can use an ICAP/eCAP content adaptation service to rewrite HTTP 
responses.


I feel like just saying Web Isolation rewrites HTTP responses is about 
like saying you're going to experience moisture when standing in front 
of a tidal wave.  Is it true?  Yes.  Does it convey scope?  Not even 
remotely.


Aside:  I think the fact that Web Isolation uses JavaScript is ironic.

However, you would need to find or create a service that implements 
the guts of what Symantec calls "Web Isolation". I doubt you will 
find similar open source services.


Ya  It seems as if Web Isolation does a full render of the requested 
page in a sandbox / custom web browser hosted on the Web Isolation 
infrastructure and sends a responsive representation thereof to clients 
for use / interaction with.  This is all done in the context of an HTTP 
request (over HTTP and / or HTTPS?) in a seemingly very transparent way.


This infrastructure to do the rendering and recomposition to generate 
and send the new facsimile to the client is WAY beyond what Squid is 
designed to do.


I agree that this probably could be done through content adaptation. 
But this seems like it is an entire product / industry unto itself.




--
Grant. . . .
unix || die





Re: [squid-users] Use squid to disable outdated security certificate warning?

2023-03-12 Thread Grant Taylor

On 3/10/23 7:19 PM, Peter Hucker wrote:
Somebody mentioned if Boinc accesses the internet through a proxy 
(and I already have it going through squid to cache data) I can get 
the proxy to disable this.  Is this possible and how?


As Amos said, it depends.

I would assume that you could use something like Squid's TLS 
intercepting capability to present current certificates from a locally 
trusted root CA to the Boinc client.


I think the biggest hurdle will be getting Squid to accept expired 
certificates from upstream servers and / or expired root certificates 
needed by upstream servers.  Maybe there are some knobs that can be 
twiddled to allow this.
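
One such knob (hedged sketch; the ACL and server name are hypothetical, 
and loosening validation should be scoped as narrowly as possible) is 
sslproxy_cert_error on a bumping proxy:

```
# Tolerate certificate validation errors (e.g. expiry) only for the
# specific servers the Boinc client talks to; deny everything else.
acl boincServers ssl::server_name .boinc-project.example.org
sslproxy_cert_error allow boincServers
sslproxy_cert_error deny all
```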


There might be other ways to address this.  This starts to get into 
black hat TLS busting methodology, but for what seems to be a white hat 
reason.




--
Grant. . . .
unix || die





Re: [squid-users] Getting ping to work via proxy

2023-07-02 Thread Grant Taylor
Pre-script:  The following is in response to one specific statement from 
Antony and not really Squid related.


On 7/1/23 5:08 PM, Antony Stone wrote:

There is no such thing as an ICMP proxy.


I'm not aware of an ICMP proxy.  But my ignorance of one doesn't 
preclude one (or more) from existing.


Also, Building Internet Firewalls from O'Reilly, both 1st and 2nd 
edition, make reference to "SOCKS wrappers for ping and traceroute".


Section 9.5.3 SOCKS Components

--8<--
The SOCKS package includes the following components:

 - The SOCKS server. This server must run on a Unix system, although it 
has been ported to many different variants of Unix.


 - The SOCKS client library for Unix machines.

 - SOCKS-ified versions of several standard Unix client programs such 
as FTP and Telnet.


 - SOCKS wrappers for ping and traceroute.

 - The runsocks program to SOCKS-ify dynamically linked programs at 
runtime without recompiling.


In addition, client libraries for Macintosh and Windows systems are 
available as separate packages.


Figure 9-4. Using SOCKS for proxying.
-->8--

So there seems to have been a way to use ping and traceroute via SOCKS 
proxy once upon a time.  It may have been lost to the sands of time.


I suspect that the book is talking about the SOCKS server from NEC, 
something that I've not been able to get my hands on yet.


It may be talking about something from Trusted Information Systems' 
(a.k.a. TIS's) Firewall Toolkit (a.k.a. fwtk).  I've not yet messed with 
the old copies of TIS FWTK that I have.


Seeing as how this is SOCKS related and me not being aware of Squid 
supporting SOCKS, it's still very much a "no, Squid doesn't support ICMP 
proxying".  At least not without doing some things that encapsulate ICMP 
traffic in some sort of tunnel that happens to flow through Squid.  But 
even that is HTTP(S) traffic as far as Squid is concerned.


Maybe websocket has something that it can do, but I'm not up on that.





Grant. . . .


Re: [squid-users] Making squid into socks proxy

2023-07-10 Thread Grant Taylor

On 7/10/23 2:36 PM, Francesco Chemolli wrote:

Hi Robert,


Hi Francesco,

in my understanding that configuration turns Squid into a Socks 
client. Outbound squid connections will then be proxied through a socks 
proxy.


According to "this  page" [1] linked from the "How to enable SOCKS5 for 
Squid Proxy?" page [2] that Rob linked to, this seems like it is both 
client /and/ server support.


--8<--
Details:

Squid handles many HTTP related protocols. But presently is unable to 
natively accept or send HTTP connections over SOCKS.


The aim of this project will be to make http_port accept SOCKS 
connections and make outgoing connections to SOCKS cache_peers so that 
Squid can send requests easily through to SOCKS gateways or act as an 
HTTP SOCKS gateway itself.

-->8--

Particularly germane are "accept ... HTTP connections over SOCKS", "make 
http_port accept SOCKS connections", and "act as an HTTP SOCKS gateway 
itself".



There might be a point, in some use cases, but I agree it's a stretch.
For instance it could help if there's a need to log URLs being accessed, 
which socks by itself can't do; it might also work as a TLS interception 
proxy which then needs to access some tightly-controlled network segment.


I can see a modicum of value in making Squid be a SOCKS client -- via 
something more graceful than an LD_PRELOAD.  I suspect that I'd turn to 
a different SOCKS daemon before I'd turn to Squid as such.  Dante comes 
to mind.


I can see a LOT of value in SOCKS proxy support.  Especially 
authenticated access to the SOCKS proxy.  Even more so if you 
communicate with the proxy over encrypted channels.


In some ways a SOCKS proxy acts sort of like a remote TCP/IP client / 
stack.  Combine this functionality with authentication and I can easily 
allow controlled remote access into a segregated network and know which 
authenticated user did what.  Much like a VPN, but at the application level.
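The "remote TCP/IP stack" comparison is visible in the wire protocol 
itself: a SOCKS5 client (RFC 1928, with RFC 1929 username/password 
authentication) just hands the server a target name and port and lets the 
server do the actual connect.  A minimal sketch of the client-side 
messages in Python -- offline byte construction only, no real proxy 
involved:

```python
import struct

def socks5_greeting() -> bytes:
    # Version 5, offering 2 auth methods: 0x00 no-auth, 0x02 user/pass.
    return b"\x05\x02\x00\x02"

def socks5_userpass(user: str, password: str) -> bytes:
    # RFC 1929 username/password sub-negotiation request.
    u, p = user.encode(), password.encode()
    return b"\x01" + bytes([len(u)]) + u + bytes([len(p)]) + p

def socks5_connect(host: str, port: int) -> bytes:
    # CONNECT request with a domain-name (ATYP 0x03) address: the
    # *server* resolves the name and opens the TCP connection, which is
    # exactly the "remote stack" behavior described above.
    h = host.encode()
    return b"\x05\x01\x00\x03" + bytes([len(h)]) + h + struct.pack(">H", port)

print(socks5_connect("example.com", 80).hex())
# → 050100030b6578616d706c652e636f6d0050
```

Because the destination travels inside the protocol, an authenticating 
server can log and filter per user and per destination, independent of 
whatever network path carried the SOCKS connection itself.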


At a former job we used a lot of SOCKS servers.  We would have a pair 
(for redundancy) of SOCKS servers co-located at client networks and 
required authentication of our employees to be able to utilize said 
SOCKS servers.  This configuration made it relatively trivial for our 
employees to change which SOCKS server they were using to switch between 
the clients they were doing administration work for.


One of the biggest advantages that I saw with SOCKS was the VPN like 
connectivity, which required authentication, and the ability to filter 
each packet / connection completely independently of the underlying 
transport between the employee and the SOCKS server(s).


What's more is that the SOCKS servers appeared as if they were on the 
client's network.  Thus they, and the remote employees using them, could 
access things on the client's network without any routing changes to the 
client's network.


Think of it sort of like a remote extension of a network in a manner 
that's completely divorced from the underlying network topology / 
transport between clients and the SOCKS server.


This complete separation also means that things that aren't expressly 
configured to use the SOCKS server can't accidentally be routed through 
the SOCKS server.  So when there is a security incident on the network, 
it will be much better isolated to one side of the SOCKS server than if 
it were a more traditional routed VPN type connection.


I can definitely see value in enabling Squid to be a SOCKS client to be 
able to utilize / benefit from the perks that SOCKS servers can offer.


I see less value in having Squid be a SOCKS server.

That being said, I wonder how closely related SOCKS server functionality 
and WebSocket support are.  Perhaps SOCKS can be another side door into 
code needed to support WebSocket.


[1] Feature: SOCKS Support - http://wiki.squid-cache.org/Features/Socks

[2] How to enable SOCKS5 for Squid proxy? - 
https://serverfault.com/questions/820578/how-to-enable-socks5-for-squid-proxy#820612




Grant. . . .


Re: [squid-users] Does Squid-cache support SOCKS5 protocol?

2023-09-11 Thread Grant Taylor

On 9/11/23 4:23 AM, Jason Long wrote:

Does the Squid-cache team have any plans to add this feature?


Is there a particular reason that you want to see Squid add support as a 
SOCKS server verses using a different existing SOCKS server?  E.g. Dante 
SOCKS server?


Dante is quite capable and can do a LOT of things.
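As a rough illustration of that capability, a minimal Dante server 
configuration with username authentication might look like the sketch 
below.  The interface names, port 1080, and the 192.0.2.0/24 destination 
are placeholder assumptions; check the sockd.conf(5) man page for your 
version's exact keywords:

```
# /etc/sockd.conf -- hypothetical minimal Dante server config (sketch)
logoutput: syslog
internal: eth0 port = 1080      # where clients connect
external: eth1                  # interface used for outbound connections
socksmethod: username           # require system username/password
user.privileged: root
user.unprivileged: nobody

# Who may talk to the server at all.
client pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
}

# What authenticated users may reach through it.
socks pass {
        from: 0.0.0.0/0 to: 192.0.2.0/24
        log: connect disconnect
}
```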



--
Grant. . . .
unix || die



Re: [squid-users] Recommended squid settings when using IPS-based domain blocking

2024-03-06 Thread Grant Taylor

On 3/6/24 08:48, Jason Marshall wrote:
We have been using squid (version squid-5.5-6.el9_3.5) under RHEL9 as a 
simple pass-through proxy without issue for the past month or so. 
Recently our security team implemented an IPS product that intercepts 
domain names known to be associated with malware and ransomware command 
and control. Once this was in place, we started having issues with the 
behavior of squid.


Can you get a feed of the verboten domains from the team and configure 
Squid to block such requests, thereby eliminating the need to do the DNS 
lookup?
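A sketch of that idea in squid.conf, assuming the security team's feed 
can be exported to a one-domain-per-line file (the path and ACL name here 
are made up):

```
# Hypothetical deny list fed from the IPS team's domain feed.
# One domain per line; ".example.com" entries match subdomains too.
acl ips_blocked dstdomain "/etc/squid/ips-blocked-domains.txt"
http_access deny ips_blocked
```

Because dstdomain matches the requested host name, Squid can refuse the 
request before ever issuing a DNS lookup for it; a `squid -k reconfigure` 
picks up changes when the feed updates.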




--
Grant. . . .




Re: [squid-users] squid acl + user through ssh

2024-04-18 Thread Grant Taylor

On 4/18/24 2:46 PM, Albert Shih wrote:
So what I'm trying to do is to use ACL according to the user who make 
the ssh connection, I don't want «another» authentication.


About the only thing that comes to mind is RFC 931 (?) ident (might be 
okay on the same system) or something that matches the process owner. 
(I'm thinking iptables process owner match extension.)
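For what it's worth, the netfilter owner match only applies to locally 
generated traffic (the OUTPUT chain), so it would look something like 
this firewall-rule sketch -- port 3128 and the username are assumptions:

```
# Hypothetical rules on the box where both sshd and Squid run:
# let alice reach the local Squid port; reject everyone else's
# locally generated connections to it.
iptables -A OUTPUT -o lo -p tcp --dport 3128 -m owner --uid-owner alice -j ACCEPT
iptables -A OUTPUT -o lo -p tcp --dport 3128 -j REJECT
```

Whether the match sees the connecting user's UID or the ssh daemon's 
depends on which process actually opens the forwarded socket, which 
lines up with the testing described below.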


But my testing seems to show that such port forwarding is done by the 
ssh daemon owner process not the connecting user.


If it wasn't for your "don't want another authentication" I'd wonder 
about username and password creds to authenticate to Squid.




--
Grant. . . .
unix || die
