Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Jonathan Lee
If it’s encrypted with TLS 1.3, it should still work with the approved certificate
authority, as it is imported to the devices I own. I just enable TLS 1.3, right?


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread jonathanlee571
The only one I got a certificate from was the non-iMac.

The iMac keeps sending change cipher requests and wants TLS 1.3 over and over; as
soon as TLS 1.2 comes up, it works.

That one has the certificate; however, that system (the Toshiba) does not have any
issues with this error. I strongly suspect that I need to enable TLS 1.3. Would you
agree?


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-05 12:02, Jonathan Lee wrote:


> Alex: I recommend determining what that CA is in these cases (e.g., by 
capturing raw TLS packets and matching them with connection information from 
A000417 error messages in cache.log or %err_detail in access.log).




I have Wireshark running. Do I just look for information with
ssl.handshake.type == 1?

Or is there a particular Wireshark filter you would like run to help with
isolation?



Please use Wireshark to determine the name of the CA that issued the 
certificate that Squid sent to the client in the failing test case. If 
you are not sure, feel free to share the issuer and subject fields of all 
certificates that Squid sent to the client in that test case (there may 
be two of each if Squid sent two certificates). Or even share a pointer 
to the entire (compressed) raw test case packet capture in pcap format!


These certificates are part of the standard TLS handshake, and Wireshark 
usually displays their fields when one studies the handshake bytes in the 
Wireshark UI.


I do not know what filter would work best, but there should be just a 
handful of TLS handshake packets to examine for the test case, so no 
filter should be necessary.
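[Editor's note: for readers looking for a starting point, display filters along
these lines should narrow a capture to the relevant handshake messages. Field
names are as in current Wireshark releases, which use the tls. prefix where
older versions used ssl.; the capture file name is illustrative only.]

```
# Certificate messages (TLS handshake type 11) - where the certificates
# that Squid sent to the client, with their issuer/subject fields, appear:
tls.handshake.type == 11

# ClientHello messages (handshake type 1), as asked about above:
tls.handshake.type == 1
```

In the packet details pane, expanding a Certificate message down to its
signedCertificate node shows the issuer and subject fields directly.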



HTH,

Alex.









Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Jonathan Lee
Side note: while analyzing Wireshark packets, I have just found that this
A000417 error only occurs with the iMac and the Safari browser; it does not
occur on Windows 10 with the Edge browser.




Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Jonathan Lee
Per:

> As the next step in triage, I recommend determining what that CA is in these
> cases (e.g., by capturing raw TLS packets and matching them with connection
> information from A000417 error messages in cache.log or %err_detail in
> access.log).

I have Wireshark running. Do I just look for information with
ssl.handshake.type == 1?

Or is there a particular Wireshark filter you would like run to help with
isolation?






Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Jonathan Lee
Thanks for the email and support with this. I will get Wireshark running on the
client and get the info required. Yes, the information prior is from the
firewall side, outside of the proxy, testing from the demilitarized zone area. I
wanted to test this first to rule that out, as traffic comes in from there first
and hits the proxy next.
Sent from my iPhone



Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-04 19:12, Jonathan Lee wrote:

> You also stated: "my current working theory suggests that we are
> looking at a (default) signUntrusted use case."
>
> I noticed for Squid documents that default is now set to off ..

The http_port option you are looking at now is not the directive I was 
talking about earlier.

> http_port
>    tls-default-ca[=off]
>        Whether to use the system Trusted CAs. Default is OFF.
>
> Would enabling this resolve the problem in Squid 6.6 for error.



No, the above poorly documented http_port option is for validating 
_client_ certificates. It has been off since Squid v4 AFAICT. Your 
clients are not sending client certificates to Squid.


According to the working theory, the problem we are solving is related 
to server certificates. http_port tls-default-ca option does not affect 
server certificate validation. Server certificate validation should use 
default CAs by default.


Outside of SslBump, server certificate validation is controlled by 
tls_outgoing_options default-ca option. That option defaults to "on". I 
am not sure whether SslBump honors that directive/option though. There 
are known related bugs in that area. However, we are jumping ahead of 
ourselves. We should confirm the working theory first.


> The squid.conf.documented lists it incorrectly

Squid has many directives, and a directive may have many options. One 
should not use a directive option name instead of a directive name, and one 
should not use an option from one directive with another directive. 
Squid naming is often inconsistent; be careful.


* http_port is a directive. tls-default-ca is an option for that 
directive. It is used for client certificate validation. It defaults to 
"off" (because client certificates are rarely signed by well-known 
(a.k.a. "default") CAs preinstalled in many deployment environments).


* tls_outgoing_options is a directive. default-ca is an option for that 
directive. It is used for server certificate validation outside of 
SslBump contexts (at least!). It defaults to "on" (because server 
certificates are usually signed by well-known (a.k.a. "default") CAs 
preinstalled in many deployment environments).
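
[Editor's note: to make the contrast concrete, a minimal squid.conf sketch of
the two distinct directives discussed above. The port number, certificate path,
and ssl-bump usage are illustrative assumptions, not a recommendation to change
anything at this triage stage.]

```
# http_port is a directive; tls-default-ca is one of ITS options and
# governs *client* certificate validation (defaults to off):
http_port 3128 ssl-bump tls-cert=/usr/local/etc/squid/myca.pem tls-default-ca=off

# tls_outgoing_options is a different directive; default-ca is one of ITS
# options and governs *server* certificate validation outside SslBump
# (defaults to on; shown here only to make the default explicit):
tls_outgoing_options default-ca=on
```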


AFAICT, the documentation in question is not wrong (but is insufficient).

Again, I do not recommend changing any Squid configuration 
directives/options at this stage of triage.


Alex.



Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-04 19:02, Jonathan Lee wrote:
I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error 
messages in cache.log or %err_detail in access.log)."


OK, I went back to 5.8 and ran the following command after I removed the
changes. Does this help? This was run on the firewall side itself.

  openssl s_client -connect foxnews.com:443

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA


Did the above connection go through Squid? Sorry, I do not know whether 
"on the firewall side itself" implies a "yes" or "no" answer in this 
test case.
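
[Editor's note: one way to make that distinction explicit in the test itself.
Modern OpenSSL s_client (1.1.0+) can connect through an HTTP proxy with the
-proxy flag, so the direct and through-Squid certificate chains can be compared
side by side. The proxy address below is an assumption taken from the log lines
earlier in the thread.]

```
# Direct to the origin server (bypasses Squid):
openssl s_client -connect foxnews.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject

# Through Squid (the certificate shown should then be the one Squid sent):
openssl s_client -connect foxnews.com:443 -proxy 192.168.1.1:3128 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject
```

If the second command shows an issuer matching the SslBump signing CA, the
connection clearly went through Squid's bumping path.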




Does that help


It does not hurt, but it is not the information I have requested for the 
next triage step: I asked about the certificate corresponding to the 
A000417 error message in Squid v6.6. You are sharing the certificate 
corresponding to either a direct connection to the origin server or the 
certificate corresponding to a problem-free connection through Squid v5.8.



Should I regenerate a new certificate for the new version of Squid and 
redeploy them all to hosts again?


IMHO, on this thread, you should follow the recommended triage steps. If 
those recommendations are problematic, please discuss.


Alex.


On Jul 4, 2024, at 14:45, Alex Rousskov 
 wrote:


On 2024-07-04 15:37, Jonathan Lee wrote:


In squid.conf I have nothing with that directive.


Sounds good; sslproxy_cert_sign default should work OK in most 
cases. I mentioned signUntrusted algorithm so that you can discover 
(from the corresponding sslproxy_cert_sign documentation) which 
CA/certificate Squid uses in which SslBump use case. Triage is often 
easier if folks share the same working theory, and my current 
working theory suggests that we are looking at a (default) 
signUntrusted use case.


The solution here probably does _not_ involve changing 
sslproxy_cert_sign configuration, but, to make progress, I need more 
info to confirm this working theory and describe next steps.




Yes I am using SSL bump with this configuration..


Noted, thank you.



So would I use this directive


I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error 
messages in cache.log or %err_detail in access.log)."



HTH,

Alex.



On Jul 4, 2024, at 09:56, Alex Rousskov wrote:

On 2024-07-04 12:11, Jonathan Lee wrote:
failure while accepting a TLS connection on conn5887 
local=192.168.1.1:3128

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417


A000417 is an "unknown CA" alert sent by the client to Squid while the 
client is trying to establish a TLS connection to/through Squid. 
The client does not trust the Certificate Authority that signed 
the certificate that was used for that TLS connection.


As the next step in triage, I recommend determining what that CA 
is in these cases (e.g., by capturing raw TLS packets and matching 
them with connection information from A000417 error messages in 
cache.log or %err_detail in access.log).
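
[Editor's note: when matching packets to these A000417 records, a display
filter on the alert itself can help. unknown_ca is TLS alert description code
48 per the TLS RFCs; newer Wireshark releases use the tls. field prefix where
older ones used ssl.]

```
# Show only "unknown CA" alerts (client -> Squid) in the capture:
tls.alert_message.desc == 48
```

The connection carrying such an alert can then be matched by source port and
timestamp to the conn/FD details in the cache.log ERROR lines.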


If you use SslBump for port 3128 traffic, then one of the 
possibilities here is that Squid is using an unknown-to-client CA 
to report an origin server that Squid itself does not trust (see 
signUntrusted in squid.conf.documented). In those cases, logging a 
level-1 ERROR is a Squid bug because that expected/desirable 
outcome should be treated as success (and a successful TLS accept 
treated as an error!).



HTH,

Alex.




Is my main concern however I use the squid guard URL blocker
Sent from my iPhone
On Jul 4, 2024, at 07:41, Alex Rousskov 
 wrote:


On 2024-07-03 13:56, Jonathan Lee wrote:

Hello fellow Squid users does anyone know how to fix this issue?


I counted about eight different "issues" in your cache.log 
sample. Most of them are probably independent. I recommend that 
you explicitly pick _one_, search mailing list archives for 
previous discussions about it, and then provide as many details 
about it as you can (e.g., what traffic causes it and/or 
matching access.log records).



HTH,

Alex.



Squid - Cache Logs:

03.07.2024 10:54:34  kick abandoning conn7853 local=192.168.1.1:3128 
remote=192.168.1.5:49710 FD 89 flags=1

03.07.2024 10:54:29  kick abandoning conn7844 local=192.168.1.1:3128 
remote=192.168.1.5:49702 FD 81 flags=1

03.07.2024 10:54:09  ERROR: failure while accepting a TLS connection on 
conn7648 local=192.168.1.1:3128 remote=192.168.1.5:49672 FD 44 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-04 18:12, Jonathan Lee wrote:

I know before I could use

tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS


However with the update I am seeing

ERROR: Unsupported TLS option SINGLE_ECDH_USE


FWIW, I can only (try to) help with one problem at this time. If you 
really want to attack two problems concurrently, I recommend starting 
another/new thread dedicated to the above problem. Others may be able to 
help you on that other thread.


Alex.

Researching the lists.squid-cache.org archives, I found that someone solved 
this by appending TLS13-AES-256-GCM-SHA384 to the ciphers.


I am thinking this is my issue also.

I see that error over and over when I run "squid -k parse".

Do I append this to the options cipher list?

Jonathan Lee
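As a general note on the SINGLE_ECDH_USE error above: OpenSSL 1.1.0 made single-use (EC)DH keys unconditional, and Squid 6 rejects the old option names at parse time. A minimal sketch of migrated directives, assuming OpenSSL 1.1.1+ (verify with `squid -k parse`):

```
# Sketch: drop SINGLE_DH_USE/SINGLE_ECDH_USE, which modern OpenSSL
# enables unconditionally and Squid 6 no longer accepts as options.
tls_outgoing_options options=NO_SSLv3
tls_outgoing_options cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
```

As far as I know, OpenSSL configures TLS 1.3 cipher suites separately from the cipher= list, so the TLS 1.3 defaults (including TLS_AES_256_GCM_SHA384) normally remain available without appending anything.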

On Jul 4, 2024, at 14:45, Alex Rousskov 
 wrote:


On 2024-07-04 15:37, Jonathan Lee wrote:


in squid.conf I have nothing with that directive.


Sounds good; sslproxy_cert_sign default should work OK in most cases. 
I mentioned signUntrusted algorithm so that you can discover (from the 
corresponding sslproxy_cert_sign documentation) which CA/certificate 
Squid uses in which SslBump use case. Triage is often easier if folks 
share the same working theory, and my current working theory suggests 
that we are looking at a (default) signUntrusted use case.


The solution here probably does _not_ involve changing 
sslproxy_cert_sign configuration, but, to make progress, I need more 
info to confirm this working theory and describe next steps.




Yes, I am using SSL bump with this configuration.


Noted, thank you.



So would I use this directive


I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error messages 
in cache.log or %err_detail in access.log)."



HTH,

Alex.



On Jul 4, 2024, at 09:56, Alex Rousskov wrote:

On 2024-07-04 12:11, Jonathan Lee wrote:
failure while accepting a TLS connection on conn5887 
local=192.168.1.1:3128

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417


A000417 is an "unknown CA" alert sent by client to Squid while the 
client is trying to establish a TLS connection to/through Squid. The 
client does not trust the Certificate Authority that signed the 
certificate that was used for that TLS connection.


As the next step in triage, I recommend determining what that CA is 
in these cases (e.g., by capturing raw TLS packets and matching them 
with connection information from A000417 error messages in cache.log 
or %err_detail in access.log).


If you use SslBump for port 3128 traffic, then one of the 
possibilities here is that Squid is using an unknown-to-client CA to 
report an origin server that Squid itself does not trust (see 
signUntrusted in squid.conf.documented). In those cases, logging a 
level-1 ERROR is a Squid bug because that expected/desirable outcome 
should be treated as success (and a successful TLS accept treated as 
an error!).



HTH,

Alex.




That is my main concern; however, I use the squidGuard URL blocker
Sent from my iPhone
On Jul 4, 2024, at 07:41, Alex Rousskov 
 wrote:


On 2024-07-03 13:56, Jonathan Lee wrote:

Hello fellow Squid users does anyone know how to fix this issue?


I counted about eight different "issues" in your cache.log sample. 
Most of them are probably independent. I recommend that you 
explicitly pick _one_, search mailing list archives for previous 
discussions about it, and then provide as many details about it as 
you can (e.g., what traffic causes it and/or matching access.log 
records).



HTH,

Alex.



Squid - Cache Logs
Date-Time    Message
31.12.1969 16:00:00
03.07.2024 10:54:34    kick abandoning 
conn7853 local=192.168.1.1:3128 remote=192.168.1.5:49710 FD 89 
flags=1

31.12.1969 16:00:00
03.07.2024 10:54:29    kick abandoning 
conn7844 local=192.168.1.1:3128 remote=192.168.1.5:49702 FD 81 
flags=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7648 local=192.168.1.1:3128 
remote=192.168.1.5:49672 FD 44 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7647 local=192.168.1.1:3128 
remote=192.168.1.5:49670 FD 43 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7646 local=192.168.1.1:3128 
remote=192.168.1.5:49668 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:53:04    ERROR: failure while accepting a TLS 
connection on conn7367 local=192.168.1.1:3128 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
It does not recognize this directive 

2024/07/04 16:16:46| Processing: url_rewrite_children 32 startup=8 idle=4 
concurrency=0
2024/07/04 16:16:46| Processing: tls-default-ca on
2024/07/04 16:16:46| /usr/local/etc/squid/squid.conf(235): unrecognized: 
'tls-default-ca'

It is not recognized with use of '=' either.
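For what it's worth, that parse error is expected if tls-default-ca sits on its own line: per the cfgman pages quoted below, it is a per-port option, not a freestanding directive. A hypothetical sketch (port options copied from the http_port line quoted later in this thread; verify against squid.conf.documented):

```
# tls-default-ca must be appended to the *_port line itself, e.g.:
http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem tls-default-ca=on
```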

> On Jul 4, 2024, at 16:12, Jonathan Lee  wrote:
> 
> You also stated: "my current working theory suggests that we are looking 
> at a (default) signUntrusted use case."
> 
> I noticed in the Squid documentation that the default is now set to off:
> 
> http://www.squid-cache.org/Versions/v5/cfgman/http_port.html
> 
> http://www.squid-cache.org/Versions/v6/cfgman/http_port.html
>   tls-default-ca[=off]
>   Whether to use the system Trusted CAs. Default is OFF.
> Would enabling this resolve the error in Squid 6.6? In theory, if the 
> default is off and Squid attempts to use a system CA anyway, it would be 
> untrusted automatically after migration.
> 
>> On Jul 4, 2024, at 14:45, Alex Rousskov  
>> wrote:
>> 
>>  my current working theory suggests that we are looking at a (default) 
>> signUntrusted use case.
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
You also stated: "my current working theory suggests that we are looking at 
a (default) signUntrusted use case."

I noticed in the Squid documentation that the default is now set to off:

http://www.squid-cache.org/Versions/v5/cfgman/http_port.html

http://www.squid-cache.org/Versions/v6/cfgman/http_port.html
  tls-default-ca[=off]
Whether to use the system Trusted CAs. Default is OFF.
Would enabling this resolve the error in Squid 6.6? In theory, if the default 
is off and Squid attempts to use a system CA anyway, it would be untrusted 
automatically after migration.

> On Jul 4, 2024, at 14:45, Alex Rousskov  
> wrote:
> 
>  my current working theory suggests that we are looking at a (default) 
> signUntrusted use case.



Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
>>> I do not recommend changing your configuration at this time. I recommend 
>>> rereading my earlier recommendation and following that instead: "As the 
>>> next step in triage, I recommend determining what that CA is in these cases 
>>> (e.g., by capturing raw TLS packets and matching them with connection 
>>> information from A000417 error messages in cache.log or %err_detail in 
>>> access.log)."


OK, I went back to 5.8 and, after removing my changes, ran the following 
command on the firewall side itself. Does this help?

 openssl s_client -connect foxnews.com:443 

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com
verify return:1
CONNECTED(0004)
---
Certificate chain
 0 s:C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com
   i:C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
 1 s:C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
   i:C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root 
CA

-----END CERTIFICATE-----
subject=C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com

issuer=C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 4198 bytes and written 393 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
DONE

Does that help? I am not going to pretend I understand the TLS options. I do 
understand how the SSL ciphers and certificates work, but all the different 
options and kinds are what confuse me. I did not seem to have this error 
before.
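To see the certificate chain Squid itself serves (rather than the origin's, as the direct s_client run above does), the same tool can be pointed through the proxy. A sketch, assuming OpenSSL 1.1.0+ for the -proxy flag and the proxy address from the logs in this thread:

```
# Connect through the proxy so Squid's generated certificate is shown:
openssl s_client -proxy 192.168.1.1:3128 -connect foxnews.com:443 -showcerts
```

Comparing the issuer printed there against the CAs imported on the failing client is exactly the check Alex asked for.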


Should I generate a new certificate for the new version of Squid and redeploy 
it to all hosts again? I used this method in the past, and it worked for a 
long time after I imported it. I am wondering if it is outdated now:

openssl req -x509 -new -nodes -key myProxykey.key -sha256 -days 365 -out 
myProxyca.pem
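If regenerating, one gotcha worth checking: for clients to accept the imported certificate as a CA, it should carry the CA basic constraint. A sketch of the full sequence, assuming OpenSSL 1.1.1+ for -addext (the file names follow the command above; the subject name is made up):

```shell
# Generate a fresh key (the command above assumes myProxykey.key exists)
openssl genrsa -out myProxykey.key 2048

# Self-signed CA certificate, explicitly marked as a CA
openssl req -x509 -new -nodes -key myProxykey.key -sha256 -days 365 \
  -subj "/CN=My Proxy CA" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign" \
  -out myProxyca.pem

# Confirm the CA flag before redeploying to hosts
openssl x509 -in myProxyca.pem -noout -text | grep -A1 "Basic Constraints"
```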


> On Jul 4, 2024, at 15:13, Jonathan Lee  wrote:
> 
> Sorry 
> 
> tls_outgoing_options 
> cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
> tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
> 
> Would I add this here?
> 
>> On Jul 4, 2024, at 15:12, Jonathan Lee  wrote:
>> 
>> I know before I could use 
>> 
>> tls_outgoing_options 
>> cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>> 
>> However with the update I am seeing 
>> 
>> ERROR: Unsupported TLS option SINGLE_ECDH_USE
>> 
>> Researching the lists.squid-cache.org archives, I found that someone solved 
>> this by appending TLS13-AES-256-GCM-SHA384 to the ciphers.
>> 
>> I am thinking this is my issue also.
>> 
>> I see that error over and over when I run "squid -k parse".
>> 
>> Do I append this to the options cipher list?
>> 
>> Jonathan Lee
>> 
>>> On Jul 4, 2024, at 14:45, Alex Rousskov  
>>> wrote:
>>> 
>>> On 2024-07-04 15:37, Jonathan Lee wrote:
>>> 
in squid.conf I have nothing with that directive.
>>> 
>>> Sounds good; sslproxy_cert_sign default should work OK in most cases. I 
>>> mentioned signUntrusted algorithm so that you can discover (from the 
>>> corresponding sslproxy_cert_sign documentation) which CA/certificate Squid 
>>> uses in which SslBump use case. Triage is often easier if folks share the 
>>> same working theory, and my current working theory suggests that we are 
>>> looking at a (default) signUntrusted use case.
>>> 
>>> The solution here probably does _not_ involve changing sslproxy_cert_sign 
>>> configuration, but, to make progress, I need more info to confirm this 
>>> working theory and describe next steps.
>>> 
>>> 
Yes, I am using SSL bump with this configuration.
>>> 
>>> Noted, thank you.
>>> 
>>> 
 So would I use this directive
>>> 
>>> I do not recommend changing your configuration at this time. I recommend 
>>> rereading my earlier recommendation and following that instead: "As the 
>>> next step in triage, I recommend determining what that CA is in these cases 
>>> (e.g., by capturing raw TLS packets and matching them with connection 
>>> information from A000417 error messages in cache.log or %err_detail in 
>>> access.log)."
>>> 
>>> 
>>> HTH,
>>> 
>>> Alex.
>>> 
>>> 
> On Jul 4, 2024, at 09:56, Alex Rousskov wrote:
> 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
Sorry 

tls_outgoing_options 
cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

Would I add this here?

> On Jul 4, 2024, at 15:12, Jonathan Lee  wrote:
> 
> I know before I could use 
> 
> tls_outgoing_options 
> cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
> 
> However with the update I am seeing 
> 
> ERROR: Unsupported TLS option SINGLE_ECDH_USE
> 
> Researching the lists.squid-cache.org archives, I found that someone solved 
> this by appending TLS13-AES-256-GCM-SHA384 to the ciphers.
> 
> I am thinking this is my issue also.
> 
> I see that error over and over when I run "squid -k parse".
> 
> Do I append this to the options cipher list?
> 
> Jonathan Lee
> 
>> On Jul 4, 2024, at 14:45, Alex Rousskov  
>> wrote:
>> 
>> On 2024-07-04 15:37, Jonathan Lee wrote:
>> 
>>> in squid.conf I have nothing with that directive.
>> 
>> Sounds good; sslproxy_cert_sign default should work OK in most cases. I 
>> mentioned signUntrusted algorithm so that you can discover (from the 
>> corresponding sslproxy_cert_sign documentation) which CA/certificate Squid 
>> uses in which SslBump use case. Triage is often easier if folks share the 
>> same working theory, and my current working theory suggests that we are 
>> looking at a (default) signUntrusted use case.
>> 
>> The solution here probably does _not_ involve changing sslproxy_cert_sign 
>> configuration, but, to make progress, I need more info to confirm this 
>> working theory and describe next steps.
>> 
>> 
>>> Yes, I am using SSL bump with this configuration.
>> 
>> Noted, thank you.
>> 
>> 
>>> So would I use this directive
>> 
>> I do not recommend changing your configuration at this time. I recommend 
>> rereading my earlier recommendation and following that instead: "As the next 
>> step in triage, I recommend determining what that CA is in these cases 
>> (e.g., by capturing raw TLS packets and matching them with connection 
>> information from A000417 error messages in cache.log or %err_detail in 
>> access.log)."
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
>> 
 On Jul 4, 2024, at 09:56, Alex Rousskov wrote:
 
 On 2024-07-04 12:11, Jonathan Lee wrote:
> failure while accepting a TLS connection on conn5887 
> local=192.168.1.1:3128
> SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417
 
 A000417 is an "unknown CA" alert sent by client to Squid while the client 
 is trying to establish a TLS connection to/through Squid. The client does 
 not trust the Certificate Authority that signed the certificate that was 
 used for that TLS connection.
 
 As the next step in triage, I recommend determining what that CA is in 
 these cases (e.g., by capturing raw TLS packets and matching them with 
 connection information from A000417 error messages in cache.log or 
 %err_detail in access.log).
 
 If you use SslBump for port 3128 traffic, then one of the possibilities 
 here is that Squid is using an unknown-to-client CA to report an origin 
 server that Squid itself does not trust (see signUntrusted in 
 squid.conf.documented). In those cases, logging a level-1 ERROR is a Squid 
 bug because that expected/desirable outcome should be treated as success 
 (and a successful TLS accept treated as an error!).
 
 
 HTH,
 
 Alex.
>> 
>> 
> That is my main concern; however, I use the squidGuard URL blocker
> Sent from my iPhone
>> On Jul 4, 2024, at 07:41, Alex Rousskov 
>>  wrote:
>> 
>> On 2024-07-03 13:56, Jonathan Lee wrote:
>>> Hello fellow Squid users does anyone know how to fix this issue?
>> 
>> I counted about eight different "issues" in your cache.log sample. Most 
>> of them are probably independent. I recommend that you explicitly pick 
>> _one_, search mailing list archives for previous discussions about it, 
>> and then provide as many details about it as you can (e.g., what traffic 
>> causes it and/or matching access.log records).
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
>> 
>>> Squid - Cache Logs
>>> Date-TimeMessage
>>> 31.12.1969 16:00:00
>>> 03.07.2024 10:54:34kick abandoning conn7853 local=192.168.1.1:3128 
>>> remote=192.168.1.5:49710 FD 89 flags=1
>>> 31.12.1969 16:00:00
>>> 03.07.2024 10:54:29kick abandoning conn7844 local=192.168.1.1:3128 
>>> remote=192.168.1.5:49702 FD 81 flags=1
>>> 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection 
>>> on conn7648 local=192.168.1.1:3128 remote=192.168.1.5:49672 FD 44 
>>> flags=1: SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
>>> 03.07.2024 10:54:09ERROR: failure while accepting a TLS 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
I know before I could use 

tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

However with the update I am seeing 

ERROR: Unsupported TLS option SINGLE_ECDH_USE

Researching the lists.squid-cache.org archives, I found that someone solved 
this by appending TLS13-AES-256-GCM-SHA384 to the ciphers.

I am thinking this is my issue also.

I see that error over and over when I run "squid -k parse".

Do I append this to the options cipher list?

Jonathan Lee

> On Jul 4, 2024, at 14:45, Alex Rousskov  
> wrote:
> 
> On 2024-07-04 15:37, Jonathan Lee wrote:
> 
>> in squid.conf I have nothing with that directive.
> 
> Sounds good; sslproxy_cert_sign default should work OK in most cases. I 
> mentioned signUntrusted algorithm so that you can discover (from the 
> corresponding sslproxy_cert_sign documentation) which CA/certificate Squid 
> uses in which SslBump use case. Triage is often easier if folks share the 
> same working theory, and my current working theory suggests that we are 
> looking at a (default) signUntrusted use case.
> 
> The solution here probably does _not_ involve changing sslproxy_cert_sign 
> configuration, but, to make progress, I need more info to confirm this 
> working theory and describe next steps.
> 
> 
>> Yes, I am using SSL bump with this configuration.
> 
> Noted, thank you.
> 
> 
>> So would I use this directive
> 
> I do not recommend changing your configuration at this time. I recommend 
> rereading my earlier recommendation and following that instead: "As the next 
> step in triage, I recommend determining what that CA is in these cases (e.g., 
> by capturing raw TLS packets and matching them with connection information 
> from A000417 error messages in cache.log or %err_detail in access.log)."
> 
> 
> HTH,
> 
> Alex.
> 
> 
>>> On Jul 4, 2024, at 09:56, Alex Rousskov wrote:
>>> 
>>> On 2024-07-04 12:11, Jonathan Lee wrote:
 failure while accepting a TLS connection on conn5887 local=192.168.1.1:3128
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417
>>> 
>>> A000417 is an "unknown CA" alert sent by client to Squid while the client 
>>> is trying to establish a TLS connection to/through Squid. The client does 
>>> not trust the Certificate Authority that signed the certificate that was 
>>> used for that TLS connection.
>>> 
>>> As the next step in triage, I recommend determining what that CA is in 
>>> these cases (e.g., by capturing raw TLS packets and matching them with 
>>> connection information from A000417 error messages in cache.log or 
>>> %err_detail in access.log).
>>> 
>>> If you use SslBump for port 3128 traffic, then one of the possibilities 
>>> here is that Squid is using an unknown-to-client CA to report an origin 
>>> server that Squid itself does not trust (see signUntrusted in 
>>> squid.conf.documented). In those cases, logging a level-1 ERROR is a Squid 
>>> bug because that expected/desirable outcome should be treated as success 
>>> (and a successful TLS accept treated as an error!).
>>> 
>>> 
>>> HTH,
>>> 
>>> Alex.
> 
> 
That is my main concern; however, I use the squidGuard URL blocker
 Sent from my iPhone
> On Jul 4, 2024, at 07:41, Alex Rousskov 
>  wrote:
> 
> On 2024-07-03 13:56, Jonathan Lee wrote:
>> Hello fellow Squid users does anyone know how to fix this issue?
> 
> I counted about eight different "issues" in your cache.log sample. Most 
> of them are probably independent. I recommend that you explicitly pick 
> _one_, search mailing list archives for previous discussions about it, 
> and then provide as many details about it as you can (e.g., what traffic 
> causes it and/or matching access.log records).
> 
> 
> HTH,
> 
> Alex.
> 
> 
>> Squid - Cache Logs
>> Date-TimeMessage
>> 31.12.1969 16:00:00
>> 03.07.2024 10:54:34kick abandoning conn7853 local=192.168.1.1:3128 
>> remote=192.168.1.5:49710 FD 89 flags=1
>> 31.12.1969 16:00:00
>> 03.07.2024 10:54:29kick abandoning conn7844 local=192.168.1.1:3128 
>> remote=192.168.1.5:49702 FD 81 flags=1
>> 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection 
>> on conn7648 local=192.168.1.1:3128 remote=192.168.1.5:49672 FD 44 
>> flags=1: SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
>> 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection 
>> on conn7647 local=192.168.1.1:3128 remote=192.168.1.5:49670 FD 43 
>> flags=1: SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
>> 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection 
>> on conn7646 local=192.168.1.1:3128 remote=192.168.1.5:49668 FD 34 
>> flags=1: SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
>> 03.07.2024 10:53:04  

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Alex Rousskov

On 2024-07-04 15:37, Jonathan Lee wrote:


in squid.conf I have nothing with that directive.


Sounds good; sslproxy_cert_sign default should work OK in most cases. I 
mentioned signUntrusted algorithm so that you can discover (from the 
corresponding sslproxy_cert_sign documentation) which CA/certificate 
Squid uses in which SslBump use case. Triage is often easier if folks 
share the same working theory, and my current working theory suggests 
that we are looking at a (default) signUntrusted use case.


The solution here probably does _not_ involve changing 
sslproxy_cert_sign configuration, but, to make progress, I need more 
info to confirm this working theory and describe next steps.




Yes, I am using SSL bump with this configuration.


Noted, thank you.


So would I use this directive 


I do not recommend changing your configuration at this time. I recommend 
rereading my earlier recommendation and following that instead: "As the 
next step in triage, I recommend determining what that CA is in these 
cases (e.g., by capturing raw TLS packets and matching them with 
connection information from A000417 error messages in cache.log or 
%err_detail in access.log)."



HTH,

Alex.



On Jul 4, 2024, at 09:56, Alex Rousskov wrote:

On 2024-07-04 12:11, Jonathan Lee wrote:
failure while accepting a TLS connection on conn5887 
local=192.168.1.1:3128

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417


A000417 is an "unknown CA" alert sent by client to Squid while the 
client is trying to establish a TLS connection to/through Squid. The 
client does not trust the Certificate Authority that signed the 
certificate that was used for that TLS connection.


As the next step in triage, I recommend determining what that CA is in 
these cases (e.g., by capturing raw TLS packets and matching them with 
connection information from A000417 error messages in cache.log or 
%err_detail in access.log).


If you use SslBump for port 3128 traffic, then one of the 
possibilities here is that Squid is using an unknown-to-client CA to 
report an origin server that Squid itself does not trust (see 
signUntrusted in squid.conf.documented). In those cases, logging a 
level-1 ERROR is a Squid bug because that expected/desirable outcome 
should be treated as success (and a successful TLS accept treated as 
an error!).



HTH,

Alex.




That is my main concern; however, I use the squidGuard URL blocker
Sent from my iPhone
On Jul 4, 2024, at 07:41, Alex Rousskov 
 wrote:


On 2024-07-03 13:56, Jonathan Lee wrote:

Hello fellow Squid users does anyone know how to fix this issue?


I counted about eight different "issues" in your cache.log sample. 
Most of them are probably independent. I recommend that you 
explicitly pick _one_, search mailing list archives for previous 
discussions about it, and then provide as many details about it as 
you can (e.g., what traffic causes it and/or matching access.log 
records).



HTH,

Alex.



Squid - Cache Logs
Date-Time    Message
31.12.1969 16:00:00
03.07.2024 10:54:34    kick abandoning 
conn7853 local=192.168.1.1:3128 remote=192.168.1.5:49710 FD 89 flags=1

31.12.1969 16:00:00
03.07.2024 10:54:29    kick abandoning 
conn7844 local=192.168.1.1:3128 remote=192.168.1.5:49702 FD 81 flags=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7648 local=192.168.1.1:3128 
remote=192.168.1.5:49672 FD 44 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7647 local=192.168.1.1:3128 
remote=192.168.1.5:49670 FD 43 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7646 local=192.168.1.1:3128 
remote=192.168.1.5:49668 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:53:04    ERROR: failure while accepting a TLS 
connection on conn7367 local=192.168.1.1:3128 
remote=192.168.1.5:49627 FD 22 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:47    ERROR: failure while accepting a TLS 
connection on conn7345 local=192.168.1.1:3128 
remote=192.168.1.5:49618 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:38    ERROR: failure while accepting a TLS 
connection on conn7340 local=192.168.1.1:3128 
remote=192.168.1.5:49616 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:52:34    ERROR: failure while accepting a TLS 
connection on conn7316 local=192.168.1.1:3128 
remote=192.168.1.5:49609 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1

31.12.1969 16:00:00
03.07.2024 10:51:55    WARNING: Error Pages Missing Language: en-us
31.12.1969 16:00:00
03.07.2024 10:51:55    ERROR: loading file 
'/usr/local/etc/squid/errors/en-us/ERR_ZERO_SIZE_OBJECT': (2) No 
such file or directory
03.07.2024 10:51:44    ERROR: failure while accepting a TLS 
connection on conn7102 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
Maybe adding it like this …

sslproxy_cert_sign signTrusted bump_only_mac https_login splice_only_mac 
NoBumpDNS NoSSLIntercept
ssl_bump peek step1
miss_access deny no_miss active_use
ssl_bump splice https_login active_use
ssl_bump splice splice_only_mac splice_only active_use
ssl_bump splice NoBumpDNS active_use
ssl_bump splice NoSSLIntercept active_use
ssl_bump bump bump_only_mac bump_only active_use
acl activated note active_use true
ssl_bump terminate !activated

acl markedBumped note bumped true
url_rewrite_access deny markedBumped


> On Jul 4, 2024, at 09:56, Alex Rousskov  
> wrote:
> 
> On 2024-07-04 12:11, Jonathan Lee wrote:
>> failure while accepting a TLS connection on conn5887 local=192.168.1.1:3128
>> SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417
> 
> A000417 is an "unknown CA" alert sent by client to Squid while the client is 
> trying to establish a TLS connection to/through Squid. The client does not 
> trust the Certificate Authority that signed the certificate that was used for 
> that TLS connection.
> 
> As the next step in triage, I recommend determining what that CA is in these 
> cases (e.g., by capturing raw TLS packets and matching them with connection 
> information from A000417 error messages in cache.log or %err_detail in 
> access.log).
> 
> If you use SslBump for port 3128 traffic, then one of the possibilities here 
> is that Squid is using an unknown-to-client CA to report an origin server 
> that Squid itself does not trust (see signUntrusted in 
> squid.conf.documented). In those cases, logging a level-1 ERROR is a Squid 
> bug because that expected/desirable outcome should be treated as success (and 
> a successful TLS accept treated as an error!).
> 
> 
> HTH,
> 
> Alex.
> P.S. For free Squid support, please keep the discussion on the mailing list.
> 
> 
>> That is my main concern; however, I use the squidGuard URL blocker
>> Sent from my iPhone
>>> On Jul 4, 2024, at 07:41, Alex Rousskov  
>>> wrote:
>>> 
>>> On 2024-07-03 13:56, Jonathan Lee wrote:
 Hello fellow Squid users does anyone know how to fix this issue?
>>> 
>>> I counted about eight different "issues" in your cache.log sample. Most of 
>>> them are probably independent. I recommend that you explicitly pick _one_, 
>>> search mailing list archives for previous discussions about it, and then 
>>> provide as many details about it as you can (e.g., what traffic causes it 
>>> and/or matching access.log records).
>>> 
>>> 
>>> HTH,
>>> 
>>> Alex.
>>> 
>>> 
 Squid - Cache Logs
 Date-TimeMessage
 31.12.1969 16:00:00
 03.07.2024 10:54:34kick abandoning conn7853 local=192.168.1.1:3128 
 remote=192.168.1.5:49710 FD 89 flags=1
 31.12.1969 16:00:00
 03.07.2024 10:54:29kick abandoning conn7844 local=192.168.1.1:3128 
 remote=192.168.1.5:49702 FD 81 flags=1
 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection on 
 conn7648 local=192.168.1.1:3128 remote=192.168.1.5:49672 FD 44 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection on 
 conn7647 local=192.168.1.1:3128 remote=192.168.1.5:49670 FD 43 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:54:09ERROR: failure while accepting a TLS connection on 
 conn7646 local=192.168.1.1:3128 remote=192.168.1.5:49668 FD 34 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:53:04ERROR: failure while accepting a TLS connection on 
 conn7367 local=192.168.1.1:3128 remote=192.168.1.5:49627 FD 22 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:52:47ERROR: failure while accepting a TLS connection on 
 conn7345 local=192.168.1.1:3128 remote=192.168.1.5:49618 FD 31 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:52:38ERROR: failure while accepting a TLS connection on 
 conn7340 local=192.168.1.1:3128 remote=192.168.1.5:49616 FD 45 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
 03.07.2024 10:52:34ERROR: failure while accepting a TLS connection on 
 conn7316 local=192.168.1.1:3128 remote=192.168.1.5:49609 FD 45 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 31.12.1969 16:00:00
 03.07.2024 10:51:55WARNING: Error Pages Missing Language: en-us
 31.12.1969 16:00:00
 03.07.2024 10:51:55ERROR: loading file 
'/usr/local/etc/squid/errors/en-us/ERR_ZERO_SIZE_OBJECT': (2) No such 
 file or directory
 03.07.2024 10:51:44ERROR: failure while accepting a TLS connection on 
 conn7102 local=192.168.1.1:3128 remote=192.168.1.5:49574 FD 34 flags=1: 
 SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
 03.07.2024 10:51:28ERROR: failure while accepting a TLS connection on 
 conn7071 local=192.168.1.1:3128 remote=192.168.1.5:49568 FD 92 flags=1: 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Jonathan Lee
I found it 

#  TAG: sslproxy_cert_sign
#
#sslproxy_cert_sign  acl ...
#
#The following certificate signing algorithms are supported:
#
#  signTrusted
#   Sign using the configured CA certificate which is usually
#   placed in and trusted by end-user browsers. This is the
#   default for trusted origin server certificates.
#
#  signUntrusted
#   Sign to guarantee an X509_V_ERR_CERT_UNTRUSTED browser error.
#   This is the default for untrusted origin server certificates
#   that are not self-signed (see ssl::certUntrusted).
#
#  signSelf
#   Sign using a self-signed certificate with the right CN to
#   generate a X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT error in the
#   browser. This is the default for self-signed origin server
#   certificates (see ssl::certSelfSigned).
#
#   This clause only supports fast acl types.
#
#   When sslproxy_cert_sign acl(s) match, Squid uses the corresponding
#   signing algorithm to generate the certificate and ignores all
#   subsequent sslproxy_cert_sign options (the first match wins). If no
#   acl(s) match, the default signing algorithm is determined by errors
#   detected when obtaining and validating the origin server certificate.
#
#   WARNING: SQUID_X509_V_ERR_DOMAIN_MISMATCH and ssl::certDomainMismatch can
#   be used with sslproxy_cert_adapt, but if and only if Squid is bumping a
#   CONNECT request that carries a domain name. In all other cases (CONNECT
#   to an IP address or an intercepted SSL connection), Squid cannot detect
#   the domain mismatch at certificate generation time when
#   bump-server-first is used.
#Default:
# none
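For reference, actually setting the directive would look something like this (a hedged sketch; the ACL name and domain are made up, not taken from the quoted configuration):

```
# Illustrative only: force trusted-CA signing for a named set of servers.
acl pinned_sites ssl::server_name .example.com
sslproxy_cert_sign signTrusted pinned_sites
```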

In squid.conf I have nothing set for that directive.

Yes, I am using SSL Bump with this configuration.


# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname Lee_Family.home.arpa
cache_mgr jonathanlee...@gmail.com
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
sslcrtd_children 10

logfile_rotate 7
debug_options rotate=7
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip
dns_nameservers 127.0.0.1 
acl block_hours time 00:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours
acl getmethod method GET
acl to_ipv6 dst ipv6
acl 

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-04 Thread Alex Rousskov

On 2024-07-04 12:11, Jonathan Lee wrote:

failure while accepting a TLS connection on conn5887 local=192.168.1.1:3128
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417


A000417 is an "unknown CA" alert sent by client to Squid while the 
client is trying to establish a TLS connection to/through Squid. The 
client does not trust the Certificate Authority that signed the 
certificate that was used for that TLS connection.


As the next step in triage, I recommend determining what that CA is in 
these cases (e.g., by capturing raw TLS packets and matching them with 
connection information from A000417 error messages in cache.log or 
%err_detail in access.log).
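As a hedged sketch of that inspection step (assuming OpenSSL is available on the box; the certificate below is a throwaway stand-in for one exported from the packet capture, and the paths/CN are illustrative):

```shell
# Create a throwaway self-signed certificate to stand in for one captured
# from the failing TLS handshake.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" \
    -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null

# Print the two fields Alex asked about: who issued the certificate,
# and whom it names.
openssl x509 -in /tmp/demo.pem -noout -issuer -subject
```

The same `openssl x509 -noout -issuer -subject` invocation works on any PEM certificate exported from Wireshark's TLS handshake view.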


If you use SslBump for port 3128 traffic, then one of the possibilities 
here is that Squid is using an unknown-to-client CA to report an origin 
server that Squid itself does not trust (see signUntrusted in 
squid.conf.documented). In those cases, logging a level-1 ERROR is a 
Squid bug because that expected/desirable outcome should be treated as 
success (and a successful TLS accept treated as an error!).



HTH,

Alex.
P.S. For free Squid support, please keep the discussion on the mailing list.



My main concern, however, is that I use the SquidGuard URL blocker.
Sent from my iPhone

On Jul 4, 2024, at 07:41, Alex Rousskov 
 wrote:


On 2024-07-03 13:56, Jonathan Lee wrote:

Hello fellow Squid users does anyone know how to fix this issue?


I counted about eight different "issues" in your cache.log sample. 
Most of them are probably independent. I recommend that you explicitly 
pick _one_, search mailing list archives for previous discussions 
about it, and then provide as many details about it as you can (e.g., 
what traffic causes it and/or matching access.log records).



HTH,

Alex.



Squid - Cache Logs
Date-Time    Message
31.12.1969 16:00:00
03.07.2024 10:54:34    kick abandoning 
conn7853 local=192.168.1.1:3128 remote=192.168.1.5:49710 FD 89 flags=1

31.12.1969 16:00:00
03.07.2024 10:54:29    kick abandoning 
conn7844 local=192.168.1.1:3128 remote=192.168.1.5:49702 FD 81 flags=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7648 local=192.168.1.1:3128 
remote=192.168.1.5:49672 FD 44 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7647 local=192.168.1.1:3128 
remote=192.168.1.5:49670 FD 43 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7646 local=192.168.1.1:3128 
remote=192.168.1.5:49668 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:53:04    ERROR: failure while accepting a TLS 
connection on conn7367 local=192.168.1.1:3128 
remote=192.168.1.5:49627 FD 22 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:47    ERROR: failure while accepting a TLS 
connection on conn7345 local=192.168.1.1:3128 
remote=192.168.1.5:49618 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:38    ERROR: failure while accepting a TLS 
connection on conn7340 local=192.168.1.1:3128 
remote=192.168.1.5:49616 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:52:34    ERROR: failure while accepting a TLS 
connection on conn7316 local=192.168.1.1:3128 
remote=192.168.1.5:49609 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1

31.12.1969 16:00:00
03.07.2024 10:51:55    WARNING: Error Pages Missing Language: en-us
31.12.1969 16:00:00
03.07.2024 10:51:55    ERROR: loading file 
'/usr/local/etc/squid/errors/en-us/ERR_ZERO_SIZE_OBJECT': (2) No 
such file or directory
03.07.2024 10:51:44    ERROR: failure while accepting a TLS 
connection on conn7102 local=192.168.1.1:3128 
remote=192.168.1.5:49574 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:51:28    ERROR: failure while accepting a TLS 
connection on conn7071 local=192.168.1.1:3128 
remote=192.168.1.5:49568 FD 92 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:50:29    ERROR: failure while accepting a TLS 
connection on conn6944 local=192.168.1.1:3128 
remote=192.168.1.5:49534 FD 101 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:49:54    ERROR: failure while accepting a TLS 
connection on conn6866 local=192.168.1.1:3128 
remote=192.168.1.5:49519 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:49:38    ERROR: failure while accepting a TLS 
connection on conn6809 local=192.168.1.1:3128 
remote=192.168.1.5:49503 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1

31.12.1969 16:00:00
03.07.2024 10:49:32    ERROR: system call failure while accepting a 
TLS connection on conn6794 local=192.168.1.1:3128 
remote=192.168.1.5:49496 FD 19 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_IO_ERR=5+errno=54
03.07.2024 10:49:24    


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-03 Thread Jonathan Lee
I forgot to mention that the certificate I use with Squid was generated with 
this method:

openssl req -x509 -new -nodes -key myProxykey.key -sha256 -days 365 -out 
myProxyca.pem
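As a hedged aside: for SslBump's generated certificates to be accepted by clients, the signing certificate must carry CA extensions (basicConstraints CA:TRUE, keyCertSign) and be imported into the clients. Whether the one-liner above produced those extensions depends on the local openssl.cnf defaults, so it is worth generating them explicitly and verifying (a sketch assuming OpenSSL 1.1.1+ for -addext; file paths and CN are illustrative, not from the original setup):

```shell
# Generate a signing CA explicitly marked CA:TRUE, which browsers require
# before they will trust certificates Squid mints with it.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
    -subj "/CN=Squid proxy CA" \
    -addext "basicConstraints=critical,CA:true" \
    -addext "keyUsage=critical,keyCertSign,cRLSign" \
    -keyout /tmp/myProxykey.key -out /tmp/myProxyca.pem 2>/dev/null

# Confirm the CA flag is present before importing the cert into clients.
openssl x509 -in /tmp/myProxyca.pem -noout -text | grep "CA:TRUE"
```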


Sent from my iPhone

> On Jul 3, 2024, at 10:56, Jonathan Lee  wrote:
> 
> Hello fellow Squid users does anyone know how to fix this issue?
> 

[squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-03 Thread Jonathan Lee
Hello fellow Squid users does anyone know how to fix this issue?

Squid - Cache Logs
Date-Time   Message
31.12.1969 16:00:00 
03.07.2024 10:54:34 kick abandoning conn7853 local=192.168.1.1:3128 
remote=192.168.1.5:49710 FD 89 flags=1
31.12.1969 16:00:00 
03.07.2024 10:54:29 kick abandoning conn7844 local=192.168.1.1:3128 
remote=192.168.1.5:49702 FD 81 flags=1
03.07.2024 10:54:09 ERROR: failure while accepting a TLS connection on 
conn7648 local=192.168.1.1:3128 remote=192.168.1.5:49672 FD 44 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09 ERROR: failure while accepting a TLS connection on 
conn7647 local=192.168.1.1:3128 remote=192.168.1.5:49670 FD 43 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:54:09 ERROR: failure while accepting a TLS connection on 
conn7646 local=192.168.1.1:3128 remote=192.168.1.5:49668 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:53:04 ERROR: failure while accepting a TLS connection on 
conn7367 local=192.168.1.1:3128 remote=192.168.1.5:49627 FD 22 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:47 ERROR: failure while accepting a TLS connection on 
conn7345 local=192.168.1.1:3128 remote=192.168.1.5:49618 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:52:38 ERROR: failure while accepting a TLS connection on 
conn7340 local=192.168.1.1:3128 remote=192.168.1.5:49616 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:52:34 ERROR: failure while accepting a TLS connection on 
conn7316 local=192.168.1.1:3128 remote=192.168.1.5:49609 FD 45 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
31.12.1969 16:00:00 
03.07.2024 10:51:55 WARNING: Error Pages Missing Language: en-us
31.12.1969 16:00:00 
03.07.2024 10:51:55 ERROR: loading file 
'/usr/local/etc/squid/errors/en-us/ERR_ZERO_SIZE_OBJECT': (2) No such file or 
directory
03.07.2024 10:51:44 ERROR: failure while accepting a TLS connection on 
conn7102 local=192.168.1.1:3128 remote=192.168.1.5:49574 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:51:28 ERROR: failure while accepting a TLS connection on 
conn7071 local=192.168.1.1:3128 remote=192.168.1.5:49568 FD 92 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:50:29 ERROR: failure while accepting a TLS connection on 
conn6944 local=192.168.1.1:3128 remote=192.168.1.5:49534 FD 101 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:49:54 ERROR: failure while accepting a TLS connection on 
conn6866 local=192.168.1.1:3128 remote=192.168.1.5:49519 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:49:38 ERROR: failure while accepting a TLS connection on 
conn6809 local=192.168.1.1:3128 remote=192.168.1.5:49503 FD 31 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
31.12.1969 16:00:00 
03.07.2024 10:49:32 ERROR: system call failure while accepting a TLS 
connection on conn6794 local=192.168.1.1:3128 remote=192.168.1.5:49496 FD 19 
flags=1: SQUID_TLS_ERR_ACCEPT+TLS_IO_ERR=5+errno=54
03.07.2024 10:49:24 ERROR: failure while accepting a TLS connection on 
conn6776 local=192.168.1.1:3128 remote=192.168.1.5:49481 FD 137 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:48:49 ERROR: failure while accepting a TLS connection on 
conn6440 local=192.168.1.1:3128 remote=192.168.1.5:49424 FD 16 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:48:49 ERROR: failure while accepting a TLS connection on 
conn6445 local=192.168.1.1:3128 remote=192.168.1.5:49426 FD 34 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:48:22 ERROR: failure while accepting a TLS connection on 
conn6035 local=192.168.1.1:3128 remote=192.168.1.5:49355 FD 226 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:48:09 ERROR: failure while accepting a TLS connection on 
conn5887 local=192.168.1.1:3128 remote=192.168.1.5:49318 FD 33 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:48:09 ERROR: failure while accepting a TLS connection on 
conn5875 local=192.168.1.1:3128 remote=192.168.1.5:49312 FD 216 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:48:09 ERROR: failure while accepting a TLS connection on 
conn5876 local=192.168.1.1:3128 remote=192.168.1.5:49314 FD 217 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1
03.07.2024 10:47:57 ERROR: failure while accepting a TLS connection on 
conn5815 local=192.168.1.1:3128 remote=192.168.1.5:49297 FD 201 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1
03.07.2024 10:47:54 ERROR: failure while accepting a TLS 

Re: [squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching large files to disk.

2024-04-12 Thread Jonathan Lee
40.1.250:3128 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> --2024-04-12 15:44:15--  
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> Connecting to 10.40.1.250:3128... connected.
> Proxy request sent, awaiting response... 200 OK
> Length: 1016070144 (969M) [application/x-iso9660-image]
> Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso.1’
> 
> ubuntu-18.04.6-live-server-amd64.iso.1  
> 100%[===>]
>  969.00M  16.0MB/sin 52s
> 
> 2024-04-12 15:45:07 (18.6 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso.1’ 
> saved [1016070144/1016070144]
> 
> and the access.log entry looks like this:
> 
> 1712936707.689  52198 10.40.1.2 TCP_MISS/200 1016070508 GET 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso - 
> HIER_DIRECT/185.125.190.40 application/x-iso9660-image
> 
> 
> 3) A subsequent http download of the same file does pull it from cache:
> 
> root@client1 [ /tmp ]# wget -e http_proxy=10.40.1.250:3128 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> --2024-04-12 15:45:23--  
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso
> Connecting to 10.40.1.250:3128... connected.
> Proxy request sent, awaiting response... 200 OK
> Length: 1016070144 (969M) [application/x-iso9660-image]
> Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso.2’
> 
> ubuntu-18.04.6-live-server-amd64.iso.2  
> 100%[===>]
>  969.00M  30.4MB/sin 36s
> 
> 2024-04-12 15:45:58 (27.0 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso.2’ 
> saved [1016070144/1016070144]
> 
> and the access.log entry looks like this:
> 
> 1712936758.943  35825 10.40.1.2 TCP_HIT/200 1016070518 GET 
> http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso - 
> HIER_NONE/- application/x-iso9660-image
> 
> 
> I am making progress; I just need to understand where I am going wrong with 
> SSL Bump for HTTPS connections. Why is it still tunnelling? If I fix that, I 
> think it will cache/pull from cache the https downloads too. #fingerscrossed
> 
> Any suggestions or decent web blogs/etc on how to configure it?
> 
> Have a great weekend,
> 
> Many Thanks
> Pin
> 
> From: Jonathan Lee 
> Sent: 12 April 2024 15:10
> To: PinPin Poola 
> Cc: squid-users@lists.squid-cache.org 
> Subject: Re: [squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching 
> large files to disk.
>  
> Do you have a refresh pattern for .ISO files? The default refresh patterns do 
> not cache .ISO files; you have to add a custom refresh pattern for them.
> 
> Something like this 
> 
> refresh_pattern -i \.(rar|jar|gz|tgz|tar|bz2|iso)(\?|$) 43800 100% 129600 # RAR | JAR | GZ | TGZ | TAR | BZ2 | ISO
> 
> ~
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching large files to disk.

2024-04-12 Thread PinPin Poola
ubuntu-18.04.6-live-server-amd64.iso
Connecting to 10.40.1.250:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1016070144 (969M) [application/x-iso9660-image]
Saving to: ‘ubuntu-18.04.6-live-server-amd64.iso.2’

ubuntu-18.04.6-live-server-amd64.iso.2  
100%[===>]
 969.00M  30.4MB/sin 36s

2024-04-12 15:45:58 (27.0 MB/s) - ‘ubuntu-18.04.6-live-server-amd64.iso.2’ 
saved [1016070144/1016070144]

and the access.log entry looks like this:

1712936758.943  35825 10.40.1.2 TCP_HIT/200 1016070518 GET 
http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso - 
HIER_NONE/- application/x-iso9660-image


I am making progress; I just need to understand where I am going wrong with SSL 
Bump for HTTPS connections. Why is it still tunnelling? If I fix that, I think 
it will cache/pull from cache the https downloads too. #fingerscrossed

Any suggestions or decent web blogs/etc on how to configure it?

Have a great weekend,

Many Thanks
Pin


From: Jonathan Lee 
Sent: 12 April 2024 15:10
To: PinPin Poola 
Cc: squid-users@lists.squid-cache.org 
Subject: Re: [squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching 
large files to disk.

Do you have a refresh pattern for .ISO files? The default refresh patterns do 
not cache .ISO files; you have to add a custom refresh pattern for them.

Something like this

refresh_pattern -i \.(rar|jar|gz|tgz|tar|bz2|iso)(\?|$) 43800 100% 129600 # RAR | JAR | GZ | TGZ | TAR | BZ2 | ISO
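The pattern's regex can be sanity-checked outside Squid (a sketch using grep; Squid applies the pattern case-insensitively because of -i, mirrored here with grep -iE):

```shell
# The refresh_pattern regex from above, as an extended regular expression.
pattern='\.(rar|jar|gz|tgz|tar|bz2|iso)(\?|$)'

# An ISO URL from this thread should match:
echo "http://releases.ubuntu.com/18.04.6/ubuntu-18.04.6-live-server-amd64.iso" \
  | grep -qiE "$pattern" && echo "ISO URL matches"

# An ordinary page should not:
echo "http://example.com/page.html" \
  | grep -qiE "$pattern" || echo "HTML URL does not match"
```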

~


[squid-users] Squid Cache 6.9 on Ubuntu 22.04.3 LTS. Not caching large files to disk.

2024-04-12 Thread PinPin Poola
I have moved on a pace since my first message yesterday - thank you all who 
helped. I can now happily download files from clients on my isolated network, 
through my new proxy. #fanfare

However, I would really like to cache any file over 1 GB in size to disk, as 
the same file could get downloaded hundreds of times a day by many different 
clients.  The cache can purge/age out after a week or so, or when getting close 
to the 150 GB limit.

I have configured cache_dir as below, but when I download a large 2 GB ISO 
file, I do not see it being cached within the /var/spool/squid directory 
structure and a subsequent download of the same file is no faster; so it is 
coming from Internet source.

My full /etc/squid/squid.conf file looks like this:

acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly 
plugged) machines
acl localnet src 172.16.0.0/12  # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly 
plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow localnet
http_access deny to_localhost
http_access deny to_linklocal
include /etc/squid/conf.d/*.conf
http_access deny all
http_port 3128
coredump_dir /var/spool/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
shutdown_lifetime 10 seconds
maximum_object_size 35 GB
cache_dir aufs /var/spool/squid 15 16 256 min-size=1073741824
cache_mem 256 MB
maximum_object_size_in_memory 512 KB
cache_replacement_policy heap LFUDA
range_offset_limit -1
quick_abort_min -1 KB
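
In case it helps later readers: per the squid.conf documentation, the second 
cache_dir argument is the cache size in megabytes, and min-size is a per-object 
lower bound in bytes. So "15 16 256 min-size=1073741824" declares a 15 MB cache 
that only accepts objects of at least 1 GB, which would explain why the 2 GB ISO 
is never cached. A sketch of a 150 GB variant (my reading, not a confirmed fix 
from the thread):

```
# ~150 GB on-disk cache (size is in MB); 16/256 are the usual L1/L2 counts.
# Dropping min-size lets objects smaller than 1 GB be cached too.
cache_dir aufs /var/spool/squid 153600 16 256
```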


I have plenty of disk space on my root partition:

Filesystem Size  Used Avail Use% Mounted on
tmpfs  2.4G  1.2M  2.4G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  364G  8.3G  341G   3% /
tmpfs   12G   12K   12G   1% /dev/shm
tmpfs  5.0M 0  5.0M   0% /run/lock
/dev/sda2  974M  252M  656M  28% /boot
tmpfs  2.4G  4.0K  2.4G   1% /run/user/1000


I would really appreciate any pointers on what I am doing wrong.

This is a test setup for now, so if there are security/best-practice concerns 
about my config I would like to be aware of them; but first I need to get it working.

Many Thanks
Pin


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid cache questions

2024-04-06 Thread Jonathan Lee
Thanks for the reply. I am using the built-in Store-ID program; however, it 
requires the database file, so I have it set only to the items in the dynamic 
cache settings' custom refresh areas. 

The rewrite program should redirect to pull from the cache, right? Only for 
bumped connections and/or cab files from Windows that come over as http. 
Squidguard only does URL checks and blocks some items that cause me issues, 
mainly doubleclick.net and a couple of other invasive sites, and/or applies 
different profiles for different devices. 

Everything works; however, I started to wonder: since I am bumping connections 
for some sites, I still would want the Windows refresh patterns to work, so I 
thought if I url_rewrite_access deny them, that would also block the cache from 
being used, right? Of course, the spliced items I just want spliced and checked 
with Squidguard. Also, is the error page itself not considered a url_rewrite?

That’s what got me confused: at the time I was thinking an invasive container 
could redirect from the cache, so I set up blocks for it; however, I am now 
wondering about the refresh items.

Thanks for the reply. Are you the guy that invented phone mail for Amos OS on 
Siemens PBX systems and ROLM phones? I did training with you in Texas if that is 
you.

Thanks again for your reply

Jonathan Lee
Adult Student 

> On Apr 6, 2024, at 20:00, Amos Jeffries  wrote:
> 
> On 5/04/24 17:25, Jonathan Lee wrote:
>>> ssl_bump splice https_login
>>> ssl_bump splice splice_only
>>> ssl_bump splice NoSSLIntercept
>>> ssl_bump bump bump_only markBumped
>>> ssl_bump stare all
>>> acl markedBumped note bumped true
>>> url_rewrite_access deny markedBumped
>> for good hits should the url_rewrite_access deny be splice not bumped 
>> connections?
>> I feel I mixed this up
> 
> Depends on what the re-write program is doing.
> 
> Ideally no traffic should be re-written by your proxy at all. Every change 
> you make to the protocol(s) as they go through adds problems to traffic 
> behaviour.
> 
> Since you have squidguard..
> * if it only does ACL checks, that is fine. But ideally those checks would be 
> done by http_access rules instead.
> * if it is actually changing URLs, that is where the problems start and 
> caching is risky.
> 
> If you are re-writing URLs just to improve caching, I recommend using 
> Store-ID feature instead for those URLs. It does a better job of balancing 
> the caching risk vs ratio gains, even though outwardly it can appear to have 
> less HITs.
> 
> 
> HTH
> Amos
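
For reference, the Store-ID approach Amos recommends pairs the 
storeid_file_rewrite helper (already present in the pfSense config later in 
this thread) with a rules file of regex/store-id pairs separated by a tab. A 
hypothetical entry (the windowsupdate.squid.internal key is my illustration, 
following the Store-ID convention of a private .squid.internal namespace):

```
# /var/squid/storeid/storeid_rewrite.txt  (regex <TAB> store-id)
# Collapse mirrored Windows Update download URLs into one cache key:
^https?://[^/]+\.download\.windowsupdate\.com/(.*)	http://windowsupdate.squid.internal/$1
```

Squid then caches under the rewritten key while still fetching from the 
original URL, so mirrors share one cached copy.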



Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries


On 6/04/24 11:34, Jonathan Lee wrote:
if (empty($settings['sslproxy_compatibility_mode']) ||
    ($settings['sslproxy_compatibility_mode'] == 'modern')) {
    // Modern cipher suites
    $sslproxy_cipher = "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS";
    $sslproxy_options .= ",NO_TLSv1";
} else {
    $sslproxy_cipher = "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS";
}

Should the RC4  be removed or allowed?

https://github.com/pfsense/FreeBSD-ports/pull/1365 






AFAIK it should be removed. What I was intending to point out was that 
its removal via "!RC4" is likely making the prior "EECDH+aRSA+RC4" 
addition pointless. Sorry if that was not clear.


If you check the TLS handshake and find Squid is working fine without 
advertising "EECDH+aRSA+RC4" it would be a bit simpler/easier to read 
the config by removing that cipher and just relying on the "!RC4".
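
Concretely, if the handshake check confirms nothing breaks, the simplified 
outgoing cipher line would look like this (the same string from the pfSense 
config with only the EECDH+aRSA+RC4 member dropped; a sketch to verify with 
"squid -k parse" and a packet capture before relying on it):

```
tls_outgoing_options cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
```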



HTH
Amos


Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries

On 5/04/24 17:25, Jonathan Lee wrote:

ssl_bump splice https_login
ssl_bump splice splice_only
ssl_bump splice NoSSLIntercept
ssl_bump bump bump_only markBumped
ssl_bump stare all
acl markedBumped note bumped true
url_rewrite_access deny markedBumped


for good hits should the url_rewrite_access deny be splice not bumped 
connections?


I feel I mixed this up



Depends on what the re-write program is doing.

Ideally no traffic should be re-written by your proxy at all. Every 
change you make to the protocol(s) as they go through adds problems to 
traffic behaviour.


Since you have squidguard..
 * if it only does ACL checks, that is fine. But ideally those checks 
would be done by http_access rules instead.
 * if it is actually changing URLs, that is where the problems start 
and caching is risky.


If you are re-writing URLs just to improve caching, I recommend using 
Store-ID feature instead for those URLs. It does a better job of 
balancing the caching risk vs ratio gains, even though outwardly it can 
appear to have less HITs.



HTH
Amos


Re: [squid-users] Squid cache questions

2024-04-05 Thread Jonathan Lee
if (empty($settings['sslproxy_compatibility_mode']) ||
    ($settings['sslproxy_compatibility_mode'] == 'modern')) {
    // Modern cipher suites
    $sslproxy_cipher = "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS";
    $sslproxy_options .= ",NO_TLSv1";
} else {
    $sslproxy_cipher = "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS";
}

Should the RC4  be removed or allowed?

https://github.com/pfsense/FreeBSD-ports/pull/1365



> On Apr 4, 2024, at 18:17, Amos Jeffries  wrote:
> 
> On 4/04/24 17:48, Jonathan Lee wrote:
>> Is there any particular order to squid configuration??
> 
> Yes. 
> 
> 
>> Does this look correct?
> 
> Best way to find out is to run "squid -k parse", which should be done after 
> upgrades as well to identify and fix changes between versions as we improve 
> the output.
> 
> 
>> I actually get a lot of hits and it functions amazingly, so I wanted to share 
>> this in case I could improve something. Are there any issues with security?
> 
> Yes, the obvious one is "DONT_VERIFY_PEER" disabling TLS security entirely on 
> outbound connections. That particular option will prevent you even being told 
> about suspicious activity regarding TLS.
> 
> Also there are a few weird things in your TLS cipher settings, such as this 
> sequence "  EECDH+aRSA+RC4:...:!RC4 "
> which, as I understand it, enables EECDH key exchange with the RC4 cipher, but 
> also forbids all uses of RC4.
> 
> 
>> I am concerned that an invasive container could become installed in the 
>> cache and data marshal the network card.
> 
> You have a limit of 4 MB for objects allowed to pass through this proxy, the 
> exception being objects from domains listed in the "windowsupdate" ACL (not 
> all Windows related), which are allowed up to 512 MB.
> 
> For the general case, any type of file which can store an image of some 
> system, and which can be cached, is a risk for that type of vulnerability.
> 
> The place to fix that vulnerability properly is not the cache or Squid. It is 
> the OS permissions allowing non-Squid software access to the cache files 
> and/or directory.
> 
> 
> 
>> Here is my config
>> # This file is automatically generated by pfSense
>> # Do not edit manually !
> 
> Since this file is generated by pfSense there is little that can be done 
> about ordering issues and very hard to tell which of the problems below are 
> due to pfSense and which are due to your settings.
> 
> FWIW, there are no major issues, just some lines not being necessary due to 
> setting things to their default values, or some blocks already denying 
> things that are blocked previously.
> 
> 
>> http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
>> dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
>> cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
>> cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>>  tls-dh=prime256v1:/etc/dh-parameters.2048 
>> options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
>> http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
>> dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
>> cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
>> cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>>  tls-dh=prime256v1:/etc/dh-parameters.2048 
>> options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
>> https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
>> dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
>> cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
>> cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>>  tls-dh=prime256v1:/etc/dh-parameters.2048 
>> options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
>> icp_port 0
>> digest_generation off
>> dns_v4_first on
>> pid_filename /var/run/squid/squid.pid
>> cache_effective_user squid
>> cache_effective_group proxy
>> error_default_language en
>> icon_directory 

Re: [squid-users] Squid cache questions

2024-04-04 Thread Amos Jeffries

On 4/04/24 17:48, Jonathan Lee wrote:

Is there any particular order to squid configuration??



Yes. 



Does this look correct?



Best way to find out is to run "squid -k parse", which should be done 
after upgrades as well to identify and fix changes between versions as 
we improve the output.



I actually get a lot of hits and it functions amazingly, so I wanted to 
share this in case I could improve something. Are there any issues with 
security?


Yes, the obvious one is "DONT_VERIFY_PEER" disabling TLS security 
entirely on outbound connections. That particular option will prevent 
you even being told about suspicious activity regarding TLS.


Also there are a few weird things in your TLS cipher settings, such as 
this sequence "  EECDH+aRSA+RC4:...:!RC4 "
which, as I understand it, enables EECDH key exchange with the RC4 cipher, 
but also forbids all uses of RC4.



I am concerned that an invasive container could become 
installed in the cache and data marshal the network card.




You have a limit of 4 MB for objects allowed to pass through this proxy, 
the exception being objects from domains listed in the "windowsupdate" ACL 
(not all Windows related), which are allowed up to 512 MB.


For the general case, any type of file which can store an image of some 
system, and which can be cached, is a risk for that type of vulnerability.


The place to fix that vulnerability properly is not the cache or Squid. 
It is the OS permissions allowing non-Squid software access to the cache 
files and/or directory.





Here is my config

# This file is automatically generated by pfSense
# Do not edit manually !


Since this file is generated by pfSense there is little that can be done 
about ordering issues and very hard to tell which of the problems below 
are due to pfSense and which are due to your settings.


FWIW, there are no major issues, just some lines not being necessary due 
to setting things to their default values, or some blocks already 
denying things that are blocked previously.





http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname 
cache_mgr 
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options flags=DONT_VERIFY_PEER
sslcrtd_children 10

logfile_rotate 0
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip

acl getmethod method GET

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain 

[squid-users] Squid cache questions

2024-04-03 Thread Jonathan Lee
Is there any particular order to squid configuration??

Does this look correct?

I actually get a lot of hits and it functions amazingly, so I wanted to share 
this in case I could improve something. Are there any issues with security? I am 
concerned that an invasive container could become installed in the cache and 
data marshal the network card.

Here is my config 

# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname Lee_Family.home.arpa
cache_mgr jonathanlee...@gmail.com
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options flags=DONT_VERIFY_PEER
sslcrtd_children 10

logfile_rotate 0
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip

acl getmethod method GET

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl windowsupdate dstdomain dc1-st.ksn.kaspersky-labs.com
acl windowsupdate dstdomain dc1-file.ksn.kaspersky-labs.com
acl windowsupdate dstdomain dc1.ksn.kaspersky-labs.com

acl rewritedoms dstdomain .facebook.com .akamaihd.net .fbcdn.net .google.com 
.static.com .apple.com .oracle.com .sun.com .java.com .adobe.com 
.steamstatic.com .steampowered.com .steamcontent.com .google.com

store_id_program /usr/local/libexec/squid/storeid_file_rewrite 
/var/squid/storeid/storeid_rewrite.txt
store_id_children 10 startup=5 idle=1 concurrency=0
always_direct allow !getmethod
store_id_access deny connect
store_id_access deny !getmethod
store_id_access allow rewritedoms
reload_into_ims on
max_stale 20 years
minimum_expiry_time 0


refresh_pattern -i squid.internal 10080 80% 79900 override-lastmod 
override-expire ignore-reload ignore-no-store ignore-must-revalidate 
ignore-private ignore-auth

#APPLE STUFF
refresh_pattern -i apple.com/.*\.(cab|exe|msi|msu|msf|asf|wmv|wma|dat|zip|dist)$ 
0 80% 43200 refresh-ims


Re: [squid-users] Squid-cache authentication is not working

2023-09-09 Thread Jason Long
Hello,
Thanks again.
You're right; I had to move the following lines after the authentication lines:

http_access allow localnet
http_access allow localhost
http_access deny all

It worked.
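
As I understand Alex's earlier point, http_access rules are evaluated top-down 
and the first match wins, so the auth rules must sit above any broad allows; 
otherwise "http_access allow localnet" matches first and authentication is 
never consulted. A minimal sketch using the helper path from this thread:

```
# In the "INSERT YOUR OWN RULE(S) HERE" section of squid.conf,
# before any broad allows such as "http_access allow localnet":
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all
```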



On Sunday, September 10, 2023 at 01:57:32 AM GMT+3:30, Alex Rousskov 
 wrote: 



On 2023-09-09 15:09, Jason Long wrote:

> My Squid-cache server IP is "192.168.1.2".
> I use Mozilla Firefox and set the proxy to "192.168.1.2:3128".
> What information do you need me to tell you?

Do you see Firefox requests/transactions reflected in Squid access.log?

Anything in Squid cache.log?

Sorry, I do not know where those logs are on your machine. Typical 
locations include /var/log/ and /usr/local/squid/var/logs

Another thing to check is whether the http_access rules you have added 
are in the right place. If you simply appended those rules to the 
default Squid configuration file, then they will not work (because 
http_access rules above them will be used instead). Default squid.conf 
marks the place where you should insert custom http_access rules: Look 
for an "INSERT YOUR OWN RULE(S) HERE" comment.

You can check this second theory by removing "http_access allow 
auth_users" and leaving just the "http_access deny all" rule that you 
have added earlier. If everything still works, then either Squid does 
not receive these requests at all (i.e. the first theory) or your access 
rules are too low (i.e. this second theory).


HTH,

Alex.


>    On Sat, Sep 9, 2023 at 5:56 PM, Alex Rousskov
>     wrote:
>    On 2023-09-09 09:09, Jason Long wrote:
> 
>      > Hello,
>      > I installed the Squid-cache on Debian 12, then I installed the
>    Apache utils:
>      >
>      > $ sudo apt install apache2-utils
>      >
>      > After it, I did the following steps:
>      >
>      > $ sudo touch /etc/squid/passwd
>      > $ sudo chown proxy /etc/squid/passwd
>      >
>      > Then:
>      >
>      > $ sudo htpasswd /etc/squid/passwd jason
>      >
>      > After it, I opened the "/etc/squid/squid.conf" file and add the
>    following lines to it:
>      >
>      > auth_param basic program /usr/lib/squid/basic_ncsa_auth
>    /etc/squid/passwd
>      > auth_param basic children 5
>      > auth_param basic realm Squid Basic Authentication
>      > auth_param basic credentialsttl 2 hours
>      > acl auth_users proxy_auth REQUIRED
>      > http_access allow auth_users
>      > http_access deny all
>      >
>      >
>      > Finally:
>      > $ sudo systemctl restart squid
>      >
>      > But, on the client machine, I can visit any website without the
>    username and password.
>      > Which part of the configuration is wrong?
> 
> 
> 
>    Many things could go wrong, but I would start from the beginning:
>    Perhaps the client (browser) is not configured to use the proxy? Do you
>    see client transactions reflected in Squid access.log? Anything in
>    Squid cache.log?
> 
>    HTH,
> 
>    Alex.
> 
> 



Re: [squid-users] Squid-cache authentication is not working

2023-09-09 Thread Alex Rousskov

On 2023-09-09 18:27, Alex Rousskov wrote:

On 2023-09-09 15:09, Jason Long wrote:


My Squid-cache server IP is "192.168.1.2".
I use Mozilla Firefox and set the proxy to "192.168.1.2:3128".
What information do you need me to tell you?


Do you see Firefox requests/transactions reflected in Squid access.log?

Anything in Squid cache.log?

Sorry, I do not know where those logs are on your machine. Typical 
locations include /var/log/ and /usr/local/squid/var/logs


Another thing to check is whether the http_access rules you have added 
are in the right place. If you simply appended those rules to the 
default Squid configuration file, then they will not work (because 
http_access rules above them will be used instead). Default squid.conf 
marks the place where you should insert custom http_access rules: Look 
for an "INSERT YOUR OWN RULE(S) HERE" comment.


You can check this second theory by removing "http_access allow 
auth_users" and leaving just the "http_access deny all" rule that you 
have added earlier. If everything still works, then either Squid does 
not receive these requests at all (i.e. the first theory) or your access 
rules are too low (i.e. this second theory).


... or the attempt to reconfigure Squid did not have an effect (e.g., 
your Squid is not using the configuration file you are editing). There 
are other possible problems/explanations as well. I did not mean to 
imply that these theories are the only possible ones.


Alex.



    On Sat, Sep 9, 2023 at 5:56 PM, Alex Rousskov
     wrote:
    On 2023-09-09 09:09, Jason Long wrote:

 > Hello,
 > I installed the Squid-cache on Debian 12, then I installed the
    Apache utils:
 >
 > $ sudo apt install apache2-utils
 >
 > After it, I did the following steps:
 >
 > $ sudo touch /etc/squid/passwd
 > $ sudo chown proxy /etc/squid/passwd
 >
 > Then:
 >
 > $ sudo htpasswd /etc/squid/passwd jason
 >
 > After it, I opened the "/etc/squid/squid.conf" file and add the
    following lines to it:
 >
 > auth_param basic program /usr/lib/squid/basic_ncsa_auth
    /etc/squid/passwd
 > auth_param basic children 5
 > auth_param basic realm Squid Basic Authentication
 > auth_param basic credentialsttl 2 hours
 > acl auth_users proxy_auth REQUIRED
 > http_access allow auth_users
 > http_access deny all
 >
 >
 > Finally:
 > $ sudo systemctl restart squid
 >
 > But, on the client machine, I can visit any website without the
    username and password.
 > Which part of the configuration is wrong?



    Many things could go wrong, but I would start from the beginning:
    Perhaps the client (browser) is not configured to use the proxy? Do you
    see client transactions reflected in Squid access.log? Anything in
    Squid cache.log?

    HTH,

    Alex.







Re: [squid-users] Squid-cache authentication is not working

2023-09-09 Thread Alex Rousskov

On 2023-09-09 15:09, Jason Long wrote:


My Squid-cache server IP is "192.168.1.2".
I use Mozilla Firefox and set the proxy to "192.168.1.2:3128".
What information do you need me to tell you?


Do you see Firefox requests/transactions reflected in Squid access.log?

Anything in Squid cache.log?

Sorry, I do not know where those logs are on your machine. Typical 
locations include /var/log/ and /usr/local/squid/var/logs


Another thing to check is whether the http_access rules you have added 
are in the right place. If you simply appended those rules to the 
default Squid configuration file, then they will not work (because 
http_access rules above them will be used instead). Default squid.conf 
marks the place where you should insert custom http_access rules: Look 
for an "INSERT YOUR OWN RULE(S) HERE" comment.


You can check this second theory by removing "http_access allow 
auth_users" and leaving just the "http_access deny all" rule that you 
have added earlier. If everything still works, then either Squid does 
not receive these requests at all (i.e. the first theory) or your access 
rules are too low (i.e. this second theory).



HTH,

Alex.



On Sat, Sep 9, 2023 at 5:56 PM, Alex Rousskov
 wrote:
On 2023-09-09 09:09, Jason Long wrote:

 > Hello,
 > I installed the Squid-cache on Debian 12, then I installed the
Apache utils:
 >
 > $ sudo apt install apache2-utils
 >
 > After it, I did the following steps:
 >
 > $ sudo touch /etc/squid/passwd
 > $ sudo chown proxy /etc/squid/passwd
 >
 > Then:
 >
 > $ sudo htpasswd /etc/squid/passwd jason
 >
 > After it, I opened the "/etc/squid/squid.conf" file and add the
following lines to it:
 >
 > auth_param basic program /usr/lib/squid/basic_ncsa_auth
/etc/squid/passwd
 > auth_param basic children 5
 > auth_param basic realm Squid Basic Authentication
 > auth_param basic credentialsttl 2 hours
 > acl auth_users proxy_auth REQUIRED
 > http_access allow auth_users
 > http_access deny all
 >
 >
 > Finally:
 > $ sudo systemctl restart squid
 >
 > But, on the client machine, I can visit any website without the
username and password.
 > Which part of the configuration is wrong?



Many things could go wrong, but I would start from the beginning:
Perhaps the client (browser) is not configured to use the proxy? Do you
see client transactions reflected in Squid access.log? Anything in
Squid cache.log?

HTH,

Alex.








Re: [squid-users] Squid-cache authentication is not working

2023-09-09 Thread Jason Long
Hi Alex,
Thank you so much for your reply. My Squid-cache server IP is 
"192.168.1.2". I use Mozilla Firefox and set the proxy to 
"192.168.1.2:3128". What information do you need me to tell you?

On Sat, Sep 9, 2023 at 5:56 PM, Alex Rousskov wrote:
On 2023-09-09 09:09, Jason Long wrote:
> Hello,
> I installed the Squid-cache on Debian 12, then I installed the Apache utils:
> 
> $ sudo apt install apache2-utils
> 
> After it, I did the following steps:
> 
> $ sudo touch /etc/squid/passwd
> $ sudo chown proxy /etc/squid/passwd
> 
> Then:
> 
> $ sudo htpasswd /etc/squid/passwd jason
> 
> After it, I opened the "/etc/squid/squid.conf" file and added the following 
> lines to it:
> 
> auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
> auth_param basic children 5
> auth_param basic realm Squid Basic Authentication
> auth_param basic credentialsttl 2 hours
> acl auth_users proxy_auth REQUIRED
> http_access allow auth_users
> http_access deny all
> 
> 
> Finally:
> $ sudo systemctl restart squid
> 
> But, on the client machine, I can visit any website without the username and 
> password.
> Which part of the configuration is wrong?


Many things could go wrong, but I would start from the beginning: 
Perhaps the client (browser) is not configured to use the proxy? Do you 
see client transactions reflected in Squid access.log? Anything in Squid 
cache.log?

HTH,

Alex.




Re: [squid-users] Squid-cache authentication is not working

2023-09-09 Thread Alex Rousskov

On 2023-09-09 09:09, Jason Long wrote:

Hello,
I installed the Squid-cache on Debian 12, then I installed the Apache utils:

$ sudo apt install apache2-utils

After that, I did the following steps:

$ sudo touch /etc/squid/passwd
$ sudo chown proxy /etc/squid/passwd

Then:

$ sudo htpasswd /etc/squid/passwd jason

After that, I opened the "/etc/squid/squid.conf" file and added the following 
lines to it:

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid Basic Authentication
auth_param basic credentialsttl 2 hours
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all


Finally:
$ sudo systemctl restart squid

But, on the client machine, I can visit any website without the username and 
password.
Which part of the configuration is wrong?



Many things could go wrong, but I would start from the beginning: 
Perhaps the client (browser) is not configured to use the proxy? Do you 
see client transactions reflected in Squid access.log? Anything in Squid 
cache.log?


HTH,

Alex.
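
One way to narrow such problems down is to exercise the helper protocol 
directly: Squid's basic auth helpers read one "username password" pair per 
line on stdin and answer with OK or ERR on stdout. The sketch below is a 
hypothetical stand-in (an in-memory user table instead of the htpasswd file 
that basic_ncsa_auth actually checks), purely to illustrate the contract:

```python
import sys

# Hypothetical credential store; basic_ncsa_auth checks an htpasswd
# file instead. The values here are purely illustrative.
USERS = {"jason": "secret"}

def check_line(line):
    """One request line in ('user password'), one verdict out."""
    parts = line.strip().split(None, 1)
    if len(parts) != 2:
        return "ERR"          # malformed request line
    user, password = parts
    return "OK" if USERS.get(user) == password else "ERR"

if __name__ == "__main__":
    for line in sys.stdin:                   # helper runs for Squid's lifetime
        print(check_line(line), flush=True)  # flush: Squid waits per line
```

The real helper can be tested the same way from a shell, e.g. 
`echo "jason secret" | /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd`, 
which should print OK when the password file is set up correctly.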


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid-cache authentication is not working

2023-09-09 Thread Jason Long
Hello,
I installed the Squid-cache on Debian 12, then I installed the Apache utils:

$ sudo apt install apache2-utils

After that, I did the following steps:

$ sudo touch /etc/squid/passwd
$ sudo chown proxy /etc/squid/passwd

Then:

$ sudo htpasswd /etc/squid/passwd jason

After that, I opened the "/etc/squid/squid.conf" file and added the following 
lines to it:

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid Basic Authentication
auth_param basic credentialsttl 2 hours
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny all


Finally:
$ sudo systemctl restart squid

But, on the client machine, I can visit any website without the username and 
password.
Which part of the configuration is wrong?

Thank you.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid-Cache 5.6 RPMs are out

2022-06-14 Thread ngtech1ltd
Hey Everybody,
 
Since 5.6 was recently published (and not all of the masters have picked it 
up yet), I have built RPMs for:
CentOS 7,8
Oracle Enterprise Linux 7,8
Amazon Enterprise Linux 2
 
All of the above include a couple of my personal patches.
Feel free to pick up the SRPMs and look at the sources.
 
All the best,
Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid-Cache Zoom is coming

2022-06-11 Thread Eliezer Croitoru
Hey Everyone,

 

I have been working on a series of Zoom meetings on Squid-Cache, from zero to
hero.

It's sort of a meet up with a basic agenda.

I would like to find the right time for these meetings and to put up a list
of subjects that I will cover in them.

However, I would like to set up a registration process so that it will be
efficient.

 

If you are willing to participate, please give a thumbs up here on the list
or privately, and I will try to set a date for these meetings.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email:   ngtech1...@gmail.com

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-24 Thread Eliezer Croitoru
Hey Amos,

I have been testing the session helper for quite some time now, and it seems
that there is no memory leak; the helpers seem to run quite stably over the
last few days.
I will continue to run the test, since it causes no issues for my proxy at
all, neither in performance nor in helper crashes.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Thursday, April 14, 2022 07:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid-Cache VS PHP, put some things in
perspective

On 13/04/22 10:30, Eliezer Croitoru wrote:
> 
> I am looking for adventurous Squid users who want to help me test whether 
> PHP 7.4+ still possesses the same old 5.x STDIN bugs.
> 


Hi Eliezer, thanks for taking on a re-investigation.


FTR, the old problem was not stdin itself. The issue was that PHP was 
designed with the fundamental assumption that it was used for scripts 
with very short execution times. Implying that all resources used would 
be freed quickly.

This assumption resulted in scripts (like helpers) which need to run for 
very long times having terrible memory and resource consumption side 
effects. Naturally that effect alone compounds badly when Squid attempts 
to run dozens or hundreds of helpers at once.

Later versions (PHP-3/4/5) that I tested had various attempts at 
internal Zend engine controls to limit the memory problems. Based on the 
same assumption though, so they chose to terminate helpers early. Which 
still causes Squid issues. PHP config settings for that Zend timeout 
were unreliable.


AFAIK, That is where we are with knowledge of PHP vs helper usage. 
PHP-6+ have not had any serious testing to check if the language updates 
have improved either resource usage or the Zend timeout/abort behaviour.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-17 Thread Eliezer Croitoru
OK, so I have created a simple Vagrant proxy:
https://github.com/elico/squid-php-helper-tests

It works with Redis and can be tuned for production use.
It's based on Oracle Enterprise Linux 8 and seems to do the job.
Is anyone interested in helping to test whether PHP is still leaking?

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Thursday, April 14, 2022 07:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid-Cache VS PHP, put some things in
perspective

On 13/04/22 10:30, Eliezer Croitoru wrote:
> 
> I am looking for adventurous Squid users who want to help me test whether 
> PHP 7.4+ still possesses the same old 5.x STDIN bugs.
> 


Hi Eliezer, thanks for taking on a re-investigation.


FTR, the old problem was not stdin itself. The issue was that PHP was 
designed with the fundamental assumption that it was used for scripts 
with very short execution times. Implying that all resources used would 
be freed quickly.

This assumption resulted in scripts (like helpers) which need to run for 
very long times having terrible memory and resource consumption side 
effects. Naturally that effect alone compounds badly when Squid attempts 
to run dozens or hundreds of helpers at once.

Later versions (PHP-3/4/5) that I tested had various attempts at 
internal Zend engine controls to limit the memory problems. Based on the 
same assumption though, so they chose to terminate helpers early. Which 
still causes Squid issues. PHP config settings for that Zend timeout 
were unreliable.


AFAIK, That is where we are with knowledge of PHP vs helper usage. 
PHP-6+ have not had any serious testing to check if the language updates 
have improved either resource usage or the Zend timeout/abort behaviour.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-13 Thread Amos Jeffries

On 13/04/22 10:30, Eliezer Croitoru wrote:


I am looking for adventurous Squid users who want to help me test whether 
PHP 7.4+ still possesses the same old 5.x STDIN bugs.





Hi Eliezer, thanks for taking on a re-investigation.


FTR, the old problem was not stdin itself. The issue was that PHP was 
designed with the fundamental assumption that it was used for scripts 
with very short execution times. Implying that all resources used would 
be freed quickly.


This assumption resulted in scripts (like helpers) which need to run for 
very long times having terrible memory and resource consumption side 
effects. Naturally that effect alone compounds badly when Squid attempts 
to run dozens or hundreds of helpers at once.


Later versions (PHP-3/4/5) that I tested had various attempts at 
internal Zend engine controls to limit the memory problems. Based on the 
same assumption though, so they chose to terminate helpers early. Which 
still causes Squid issues. PHP config settings for that Zend timeout 
were unreliable.



AFAIK, That is where we are with knowledge of PHP vs helper usage. 
PHP-6+ have not had any serious testing to check if the language updates 
have improved either resource usage or the Zend timeout/abort behaviour.



HTH
Amos
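
For readers wondering what the helper contract actually looks like: the loop
itself is tiny in any language; the hard part Amos describes is keeping memory
flat while it runs for the proxy's whole lifetime. Below is a minimal sketch
(an illustration, not code from this thread) of a concurrency-enabled rewrite
helper, where each request line starts with a channel ID that must be echoed
back with the reply:

```python
import sys

def handle(request):
    """Per-request work. Must not accumulate state, or memory grows for
    as long as Squid keeps the helper process alive."""
    return "ERR"  # for URL-rewrite helpers, ERR means "leave the URL unchanged"

def main():
    for line in sys.stdin:                    # runs until Squid closes stdin
        channel, _, request = line.rstrip("\n").partition(" ")
        sys.stdout.write("%s %s\n" % (channel, handle(request)))
        sys.stdout.flush()                    # Squid reads replies line-by-line

if __name__ == "__main__":
    main()
```

Used with something like `url_rewrite_children 16 ... concurrency=4`, Squid
multiplexes several outstanding requests over one helper process and matches
replies to requests by the channel ID.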
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-12 Thread Eliezer Croitoru
Hey Everybody,

 

For as long as I have known Squid-Cache, I remember hearing over and over not
to use PHP; however, it's a great language.

I had the pleasure of hearing a talk from the creator of PHP, and it gave me
a couple of answers to my doubts, so I wanted to say a couple of good words
about PHP.

 

First, the talk is available at the following link:

https://youtu.be/wCZ5TJCBWMg

 

Title: 25 Years of PHP (by the Creator of PHP)

Description: PHP has been around for almost as long as the Web. 25 years!

Join me for a fun look at the highlights (and lowlights) of this crazy trip.
But I will also be trying to convince you to upgrade your PHP version. 

The performance alone should be enough, if not, I have a few other tricks up
my sleeve to try to win you over.

Performance optimization, static analysis, zero-cost profiling, dead code
elimination and escape analysis are just some of the concepts that will be
covered.

 

EVENT:

 

phpday 2019 | Verona, May 10-11th | phpday.it

 

SPEAKER:

 

Rasmus Lerdorf

 

PUBLICATION PERMISSIONS:

 

Original video was published with the Creative Commons license.

## END OF SECTION

 

PHP is a good language if not one of the best languages ever made.

And I can see daily how it allows many parts of the internet and the world to
just work, making the world a better place.
(There are bad uses for anything good.)

I have been using Squid-Cache for the last ~14 years for many things, and I
am really not a programmer.

I actually didn't even like to code, and I have seen uses of PHP that amazed
me over all these years.

For those who want to run a Squid helper with PHP, you just need to
understand that PHP was not built for this purpose.

I assume that the availability of PHP helper examples and the simplicity of
the language's technical resources might be the cause of this.

 

I want to run a test of a PHP helper with PHP 7.4 and PHP 8.0; they both
contain a couple of amazing improvements, but this needs to be tested.

The following skeleton:

https://gist.githubusercontent.com/elico/5d1cc6dceebbe7ae8f6cedf158396905/raw/1655125419b5063477723f9f1687167afd003665/fake-helper.php

 

is a fake PHP helper for the tests.

I really recommend other languages and other ways to implement a helper
solution, but if we are able to test this, it's possible that the conclusions
will be more than satisfying to prove whether the language issues were fixed.

 

I need an idea for a testing helper and was thinking about a basic session
helper.

 

In my last take on a session helper, what I did was write the following:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/SplashPageTemplate

 

And I have also seen that there are a couple of examples for the:

readline_callback_handler_install

 

function in PHP which might result in a solution for the problem.
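
Whatever the implementation language, the core session logic being tested is
small. Below is a sketch of the usual idea (my own illustration, not Eliezer's
helper; the TTL value is an arbitrary example): answer OK while the key's
session is fresh, otherwise start a new session and answer ERR so Squid can,
for example, serve the splash page:

```python
import time

TTL = 3600          # session lifetime in seconds (arbitrary example value)
_sessions = {}      # key (e.g. client IP) -> time of last activity

def check_session(key, now=None):
    """Return OK while the session is fresh; ERR starts a new session."""
    now = time.time() if now is None else now
    last = _sessions.get(key)
    _sessions[key] = now            # every request refreshes the session
    if last is not None and now - last < TTL:
        return "OK"
    return "ERR"                    # new or expired session -> splash page
```

The long-running stdin/stdout loop around this function is exactly where the
PHP resource-consumption question from this thread comes in.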

 

I am looking for adventurous Squid users who want to help me test whether PHP
7.4+ still possesses the same old 5.x STDIN bugs.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email:   ngtech1...@gmail.com

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2021-03-01 Thread Majed Zouhairy

Thanks for, at least, the explanation

On 3/1/21 6:12 PM, Alex Rousskov wrote:

On 3/1/21 2:07 AM, Majed Zouhairy wrote:

i tried this, but neither the https download bandwidth restriction nor
caching seems to be working as expected


Squid cannot cache HTTP responses without bumping HTTPS traffic. This is
a protocol-level limitation, not a bug.

There are known delay pools bugs for not-bumped (i.e. tunneled or
CONNECT) traffic. IIRC, the pools may work for some tunnels, but the
imposed limits may vary significantly from the configured values.


HTH,

Alex.



acl slower src 10.46.10.78
acl localnet src 10.46.10.0/24

acl SSL_ports port 443
acl Safe_ports port 80    # http
acl Safe_ports port 8080    # http
acl Safe_ports port 21    # ftp
acl Safe_ports port 443    # https
acl Safe_ports port 70    # gopher
acl Safe_ports port 210    # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280    # http-mgmt
acl Safe_ports port 488    # gss-http
acl Safe_ports port 591    # filemaker
acl Safe_ports port 777    # multiling http
acl CONNECT method CONNECT
acl blockfiles urlpath_regex -i "/etc/squid/blocks.files.acl"

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
visible_hostname proxy.lk.sk


delay_pools 1
delay_class 1 3
delay_access 1 allow slower
delay_access 1 deny all
delay_parameters 1 51200/51200 -1/-1 51200/25600

http_access allow localnet
http_access allow localhost



# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 8080

# Uncomment and adjust the following to add a disk cache directory.
# Updates: chrome and acrobat
refresh_pattern -i gvt1.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80%
129600 reload-into-ims
refresh_pattern -i adobe.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80%
129600 reload-into-ims



range_offset_limit 200 MB
maximum_object_size 200 MB
quick_abort_min -1

# DONT MODIFY THESE LINES
refresh_pattern \^ftp:   1440    20% 10080
refresh_pattern \^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
refresh_pattern .   0  20% 43200

cache_dir ufs /var/cache/squid 3000 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

cache_mem 1024 MB

netdb_filename none

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:    1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .    0    20%    4320

url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient -m 4 -l
/var/log/squid/
url_rewrite_children 16 startup=8 idle=2 concurrency=4
#debug_options ALL,1 33,2 28,9


any help?


On 2/26/21 10:22 AM, Majed Zouhairy wrote:


Health be Upon you,

i want to cache certain files, let's say exe, msi... above 20MB and
below 300MB, limit the cache directory to 3GB
i have no ssl bump not configured
version 4.14
how to do that?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2021-03-01 Thread Alex Rousskov
On 3/1/21 2:07 AM, Majed Zouhairy wrote:
> i tried this, but neither the https download bandwidth restriction nor
> caching seems to be working as expected

Squid cannot cache HTTP responses without bumping HTTPS traffic. This is
a protocol-level limitation, not a bug.

There are known delay pools bugs for not-bumped (i.e. tunneled or
CONNECT) traffic. IIRC, the pools may work for some tunnels, but the
imposed limits may vary significantly from the configured values.


HTH,

Alex.
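
To see why the imposed limits can drift from the configured values, it helps
to model what a class-3 pool is: three token buckets (aggregate, per-network,
per-individual), where each read is capped by the emptiest limited bucket.
Below is a simplified model of the delay_parameters line from the quoted
config (Squid's real event timing is where the real-world deviations come
from):

```python
class Bucket:
    """One delay-pool bucket: 'restore' bytes/sec refill, capped at 'maximum'.
    restore == -1 models squid.conf's -1/-1 (unlimited)."""
    def __init__(self, restore, maximum):
        self.restore, self.maximum = restore, maximum
        self.level = maximum                    # buckets start full

    def refill(self, seconds):
        if self.restore >= 0:
            self.level = min(self.maximum,
                             self.level + self.restore * seconds)

def allowed_read(wanted, buckets):
    """Grant what the emptiest limited bucket holds, then debit them all."""
    grant = min([wanted] + [b.level for b in buckets if b.restore >= 0])
    for b in buckets:
        if b.restore >= 0:
            b.level -= grant
    return grant

# delay_parameters 1 51200/51200 -1/-1 51200/25600, as in the posted config:
pool = [Bucket(51200, 51200), Bucket(-1, -1), Bucket(51200, 25600)]
```

With these numbers, a client's first burst is capped at 25600 bytes (the
individual bucket), after which throughput settles around the 51200 byte/s
restore rates.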


> acl slower src 10.46.10.78
> acl localnet src 10.46.10.0/24
> 
> acl SSL_ports port 443
> acl Safe_ports port 80    # http
> acl Safe_ports port 8080    # http
> acl Safe_ports port 21    # ftp
> acl Safe_ports port 443    # https
> acl Safe_ports port 70    # gopher
> acl Safe_ports port 210    # wais
> acl Safe_ports port 1025-65535    # unregistered ports
> acl Safe_ports port 280    # http-mgmt
> acl Safe_ports port 488    # gss-http
> acl Safe_ports port 591    # filemaker
> acl Safe_ports port 777    # multiling http
> acl CONNECT method CONNECT
> acl blockfiles urlpath_regex -i "/etc/squid/blocks.files.acl"
> 
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
> 
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
> visible_hostname proxy.lk.sk
> 
> 
> delay_pools 1
> delay_class 1 3
> delay_access 1 allow slower
> delay_access 1 deny all
> delay_parameters 1 51200/51200 -1/-1 51200/25600
> 
> http_access allow localnet
> http_access allow localhost
> 
> 
> 
> # And finally deny all other access to this proxy
> http_access deny all
> 
> # Squid normally listens to port 3128
> http_port 8080
> 
> # Uncomment and adjust the following to add a disk cache directory.
> # Updates: chrome and acrobat
> refresh_pattern -i gvt1.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80%
> 129600 reload-into-ims
> refresh_pattern -i adobe.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80%
> 129600 reload-into-ims
> 
> 
> 
> range_offset_limit 200 MB
> maximum_object_size 200 MB
> quick_abort_min -1
> 
> # DONT MODIFY THESE LINES
> refresh_pattern \^ftp:   1440    20% 10080
> refresh_pattern \^gopher:    1440    0%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
> refresh_pattern .   0  20% 43200
> 
> cache_dir ufs /var/cache/squid 3000 16 256
> 
> # Leave coredumps in the first cache dir
> coredump_dir /var/cache/squid
> 
> cache_mem 1024 MB
> 
> netdb_filename none
> 
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp:    1440    20%    10080
> refresh_pattern ^gopher:    1440    0%    1440
> refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
> refresh_pattern .    0    20%    4320
> 
> url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient -m 4 -l
> /var/log/squid/
> url_rewrite_children 16 startup=8 idle=2 concurrency=4
> #debug_options ALL,1 33,2 28,9
> 
> 
> any help?
> 
> 
> On 2/26/21 10:22 AM, Majed Zouhairy wrote:
>>
>> Health be Upon you,
>>
>> i want to cache certain files, let's say exe, msi... above 20MB and
>> below 300MB, limit the cache directory to 3GB
>> i have no ssl bump not configured
>> version 4.14
>> how to do that?
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2021-02-28 Thread Majed Zouhairy
i tried this, but neither the https download bandwidth restriction nor 
caching seems to be working as expected


acl slower src 10.46.10.78
acl localnet src 10.46.10.0/24

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 8080# http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl blockfiles urlpath_regex -i "/etc/squid/blocks.files.acl"

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
visible_hostname proxy.lk.sk


delay_pools 1
delay_class 1 3
delay_access 1 allow slower
delay_access 1 deny all
delay_parameters 1 51200/51200 -1/-1 51200/25600

http_access allow localnet
http_access allow localhost



# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 8080

# Uncomment and adjust the following to add a disk cache directory.
# Updates: chrome and acrobat
refresh_pattern -i gvt1.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80% 
129600 reload-into-ims
refresh_pattern -i adobe.com/.*\.(exe|ms[i|u|f|p]|dat|zip|psf) 43200 80% 
129600 reload-into-ims




range_offset_limit 200 MB
maximum_object_size 200 MB
quick_abort_min -1

# DONT MODIFY THESE LINES
refresh_pattern \^ftp:   144020% 10080
refresh_pattern \^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%  0
refresh_pattern . 0  20% 43200

cache_dir ufs /var/cache/squid 3000 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

cache_mem 1024 MB

netdb_filename none

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient -m 4 -l 
/var/log/squid/

url_rewrite_children 16 startup=8 idle=2 concurrency=4
#debug_options ALL,1 33,2 28,9


any help?


On 2/26/21 10:22 AM, Majed Zouhairy wrote:


Health be Upon you,

i want to cache certain files, let's say exe, msi... above 20MB and 
below 300MB, limit the cache directory to 3GB

i have no ssl bump not configured
version 4.14
how to do that?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache

2021-02-25 Thread Majed Zouhairy


Health be Upon you,

I want to cache certain files, let's say exe, msi... above 20 MB and 
below 300 MB, and limit the cache directory to 3 GB.

I have no SSL bump configured.
Version: 4.14.
How do I do that?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid cache with SSL

2020-05-27 Thread Eliezer Croitoru
Hey Amos,

I am not sure I understand the "if", and what the risks of this subject are. From what I understand so far, Google doesn't use any DH concept on specific keys. I do believe that there is a reason for the obvious ABORT: the client is allowed to abort, and in most cases the software decides to ABORT if there is an issue with the given certificate. The most obvious reason for such a case is that the client software tries to peek inside the "given" TLS connection and understand whether it's a good idea to continue with the session conditions.

I do agree that forced caching is a very bad idea. However, I do believe that there are use cases for such methods... only and only in a dev environment.

If Google or any other leaf of the network is trying to cache the ISP, or to push traffic into it, the ISP is allowed by law... to do what it needs to do to protect the clients. I am not sure that there is any risk in doing so, compared to what Google did to the internet.

Just a scenario I have in mind: if the world doesn't really need Google to survive, like some try to argue, would an IT specialist give up on Google? I.e., given a better, much safer alternative? I believe Google is a milestone for humanity. However, if no one understands the risks of the local databases, why these databases exist and are protected in the first place, and why they shouldn't be exposed to the public, there is an opening for those who want to access these databases.
Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

From: Amos Jeffries
Sent: Monday, May 25, 2020 1:02 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid cache with SSL

[snip]

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

Re: [squid-users] Squid cache with SSL

2020-05-25 Thread Amos Jeffries
On 25/05/20 8:09 pm, Andrey Etush-Koukharenko wrote:
> Hello, I'm trying to set up a cache for GCP signed URLs using squid 4.10
> I've set ssl_bump:
> *http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> 
> sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db
> -M 4MB
> 
> acl step1 at_step SslBump1
> 
> ssl_bump peek step1
> ssl_bump bump all*

The above SSL-Bump configuration tries to auto-generate server
certificates based only on details in the TLS client handshake. This
leads to a huge number of problems, not least of which is completely
breaking TLS security properties.

Prefer doing the bump at step 3.


> *
> I've set cache like this:
> 
> *refresh_pattern -i my-dev.storage.googleapis.com/.* 
> 4320 80% 43200 override-expire ignore-reload ignore-no-store ignore-private*
> *

FYI: that does not setup the cache. It provides *default* parameters for
the heuristic expiry algorithm.

* override-expire replaces the max-age (or Expires header) parameter
with 43200 minutes from object creation.
  This often has the effect of forcing objects to expire from cache long
before they normally would.

* ignore-reload makes Squid ignore requests from the client to update
its cached content.
 This forces content which is stale, outdated, corrupt, or plain wrong
to remain in cache no matter how many times clients try to re-fetch for
a valid response.

* ignore-private makes Squid cache content that is never supposed to be shared
between clients.
 To prevent personal data being shared between clients who should never
see it Squid will revalidate these objects. Usually different data will
return, making this just a waste of cache space.

* ignore-no-store makes Squid cache objects that are explicitly
*forbidden* to be stored in a cache.
  80% of 0 seconds == 0 seconds before these objects become stale and
expire from cache.

Given that you described this as a problem with an API doing *signing*
of things I expect that at least some of those objects will be security
keys. Possibly generated specifically per-item keys, where forced
caching is a *BAD* idea.

I recommend removing that line entirely from your config file and
letting the Google developers instructions do what they are intended to
do with the cacheability. At the very least start from the default
caching behaviour and see how it works normally before adding protocol
violations and unusual (mis)behaviours to how the proxy caches things.
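
For reference, the heuristic these options override can be written down
compactly. Roughly (a simplification; consult the refresh_pattern
documentation for the full rule order, and note squid.conf takes MIN/MAX in
minutes while this sketch uses seconds throughout): an object with no explicit
expiry is fresh while its age is under MIN, stale once past MAX, and in
between fresh while age < percent x (time since Last-Modified):

```python
def is_fresh(age, min_s, percent, max_s, since_modified=None):
    """Simplified refresh_pattern heuristic, ignoring override-* options.
    All times are in seconds; 'percent' is the lm-factor as 0-100."""
    if age <= min_s:
        return True                    # younger than MIN: always fresh
    if age > max_s:
        return False                   # older than MAX: always stale
    if since_modified is not None:
        # Between MIN and MAX, freshness scales with the object's stability.
        return age < since_modified * (percent / 100.0)
    return False                       # no Last-Modified: treat as stale
```

For example, with a default-style rule like `refresh_pattern . 0 20% 4320`, an
object last modified 10 hours before it was fetched stays fresh for about 2
hours.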


> In the cache directory, I see that object was stored after the first
> call, but when I try to re-run the URL I always get:
> TCP_REFRESH_UNMODIFIED_ABORTED/200

What makes you think anything is going wrong?

 Squid found the object in cache (HIT).
 The object requirements were to check with the origin server about
whether it could still be used (HIT becomes REFRESH).
 The origin server said it was fine to deliver (UNMODIFIED).
 Squid started delivery (status 200).
 The client disconnected before delivery of the response could be
completed (ABORTED).

Clients are allowed to disconnect at any time, for any reason.
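The breakdown above can be mechanised. A small illustrative helper (not part of Squid itself) that splits such an access.log result code into its pieces:

```python
def parse_squid_result(code):
    """Split a Squid access.log result code such as
    'TCP_REFRESH_UNMODIFIED_ABORTED/200' into its components.
    Illustrative only; Squid's full tag grammar has more suffixes."""
    tag, _, status = code.partition("/")
    parts = tag.split("_")
    transport = parts[0]                  # e.g. TCP or UDP
    aborted = parts[-1] == "ABORTED"      # client disconnected early
    if aborted:
        parts = parts[:-1]
    handling = "_".join(parts[1:])        # e.g. REFRESH_UNMODIFIED
    return {"transport": transport, "handling": handling,
            "aborted": aborted, "status": int(status)}

# The log entry from the question above:
print(parse_squid_result("TCP_REFRESH_UNMODIFIED_ABORTED/200"))
# {'transport': 'TCP', 'handling': 'REFRESH_UNMODIFIED', 'aborted': True, 'status': 200}
```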


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid cache with SSL

2020-05-25 Thread Andrey Etush-Koukharenko
Hello, I'm trying to set up a cache for GCP signed URLs using squid 4.10
I've set ssl_bump:







http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

I've set cache like this:

refresh_pattern -i my-dev.storage.googleapis.com/.* 4320 80% 43200 override-expire ignore-reload ignore-no-store ignore-private

In the cache directory, I see that the object was stored after the first
call, but when I try to re-run the URL I always get:
TCP_REFRESH_UNMODIFIED_ABORTED/200

and an empty object. I've tried to play with refresh_pattern params but
still no luck.

Thanks for your help
Andrey
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache proxy which does it all

2020-01-09 Thread robert k Wild
thanks for this Amos, really appreciate it :)

On Thu, 9 Jan 2020 at 19:00, Amos Jeffries  wrote:

> On 9/01/20 8:34 pm, robert k Wild wrote:
> > hi all,
> >
> > I have made a script for squid that installs the following –
> >
> > Squid – http proxy server
> > Squid ssl-bump – https interception for squid
> > C-ICAP – icap server
> > clamAV – AV engine to detect trojan viruses malware etc
> > squidclamav – to make it all integrated with squid
> >
> > what do you think?
> >
> > #!/bin/bash
> > #squid on DMZ host
> > #
> > #first things first lets disable firewalld and SElinux
> > #
> > systemctl stop firewalld
> > systemctl disable firewalld
> > sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
> > #
>
> Why?
>
>
>
> > #squid packages
> > #
> > yum install -y epel-release swaks sed tar zip unzip curl telnet openssl
> > openssl-devel bzip2-devel libarchive libarchive-devel perl
> > perl-Data-Dumper gcc gcc-c++ binutils autoconf automake make sudo wget
> > libxml2-devel libcap-devel libtool-ltdl-devel
> > #
> > #clamAV packages
> > #
> > yum install -y clamav-server clamav-data clamav-update clamav-filesystem
> > clamav clamav-scanner-systemd clamav-devel clamav-lib
> clamav-server-systemd
> > #
> > #download and compile from source
> > #
> > cd /tmp
> > wget http://www.squid-cache.org/Versions/v4/squid-4.9.tar.gz
>
> Please use rsync for this, and verify against the *.asc file signature
> that you got the file correctly.
>
> > wget
> >
> http://sourceforge.net/projects/c-icap/files/c-icap/0.5.x/c_icap-0.5.6.tar.gz
> > wget
> >
> http://sourceforge.net/projects/c-icap/files/c-icap-modules/0.5.x/c_icap_modules-0.5.4.tar.gz
> > wget
> >
> https://sourceforge.net/projects/squidclamav/files/squidclamav/7.1/squidclamav-7.1.tar.gz
> > for f in *.tar.gz; do tar xf "$f"; done
> > cd /tmp/squid-4.9
> > ./configure --with-openssl --enable-ssl-crtd --enable-icap-client &&
> > make && make install
> > #
>
> IIRC this was a CentOS machine, right?
> If so, see 
> otherwise see the equivalent wiki page for your chosen OS compile.
>
> Those settings install Squid as a system application. So no need for the
> /usr/local stuff.
>
>
> > cd /tmp/c_icap-0.5.6
> > ./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe'
> > --without-bdb --prefix=/usr/local && make && make install
> > #
> > cd /tmp/squidclamav-7.1
> > ./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe'
> > --with-c-icap=/usr/local --with-libarchive && make && make install
> > #
> > cd /tmp/c_icap_modules-0.5.4
> > ./configure 'CFLAGS=-O3 -m64 -pipe'
> > 'CPPFLAGS=-I/usr/local/clamav/include' 'LDFLAGS=-L/usr/local/lib
> > -L/usr/local/clamav/lib/' && make && make install
> > #
> > #creating shortcuts and copying files
> > #
> > cp -f /usr/local/squid/etc/squid.conf
> /usr/local/squid/etc/squid.conf.orig
> > cp -f /usr/local/etc/c-icap.conf /usr/local/etc/c-icap.conf.orig
> > cp -f /usr/local/etc/squidclamav.conf
> /usr/local/etc/squidclamav.conf.orig
> > cp -f /usr/local/etc/clamav_mod.conf /usr/local/etc/clamav_mod.conf.orig
> > cp -f /usr/local/etc/virus_scan.conf /usr/local/etc/virus_scan.conf.orig
> > #
> > ln -s /usr/local/squid/etc/squid.conf /etc
> > ln -s /usr/local/etc/c-icap.conf /etc
> > ln -s /usr/local/etc/squidclamav.conf /etc
> > ln -s /usr/local/etc/clamav_mod.conf /etc
> > ln -s /usr/local/etc/virus_scan.conf /etc
> > #
> > mkdir -p /usr/local/clamav/share/clamav
> > ln -s /var/lib/clamav /usr/local/clamav/share/clamav
> > #
> > #tmpfiles for run files
> > #
> > echo "d /var/run/c-icap 0755 root root -" >> /etc/tmpfiles.d/c-icap.conf
> > echo "d /var/run/clamav 0755 root root -" >> /etc/tmpfiles.d/clamav.conf
> > #
> > #delete a few lines in squid
> > #
> > sed -i '/http_port 3128/d' /usr/local/squid/etc/squid.conf
> > sed -i '/http_access deny all/d' /usr/local/squid/etc/squid.conf
>
> Please do not remove that second line from your squid.conf. It will
> result in unpredictable default allow/deny behaviour from your proxy.
>
> Instead I recommend (mind the wrap):
>
>  sed -i 's|# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS|include "/etc/squid/squid.conf.d/*"|' \
>    /usr/local/squid/etc/squid.conf
>
> Then you can just drop files into the /etc/squid/squid.conf.d/ directory
> and they will be loaded as config on next start or reconfigure.
>
>
>
> HTH
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
Regards,

Robert K Wild.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache proxy which does it all

2020-01-09 Thread Amos Jeffries
On 9/01/20 8:34 pm, robert k Wild wrote:
> hi all,
> 
> I have made a script for squid that installs the following –
> 
> Squid – http proxy server
> Squid ssl-bump – https interception for squid
> C-ICAP – icap server
> clamAV – AV engine to detect trojan viruses malware etc
> squidclamav – to make it all integrated with squid
> 
> what do you think?
> 
> #!/bin/bash
> #squid on DMZ host
> #
> #first things first lets disable firewalld and SElinux
> #
> systemctl stop firewalld
> systemctl disable firewalld
> sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
> #

Why?



> #squid packages
> #
> yum install -y epel-release swaks sed tar zip unzip curl telnet openssl
> openssl-devel bzip2-devel libarchive libarchive-devel perl
> perl-Data-Dumper gcc gcc-c++ binutils autoconf automake make sudo wget
> libxml2-devel libcap-devel libtool-ltdl-devel
> #
> #clamAV packages
> #
> yum install -y clamav-server clamav-data clamav-update clamav-filesystem
> clamav clamav-scanner-systemd clamav-devel clamav-lib clamav-server-systemd
> #
> #download and compile from source
> #
> cd /tmp
> wget http://www.squid-cache.org/Versions/v4/squid-4.9.tar.gz

Please use rsync for this, and verify against the *.asc file signature
that you got the file correctly.

> wget
> http://sourceforge.net/projects/c-icap/files/c-icap/0.5.x/c_icap-0.5.6.tar.gz
> wget
> http://sourceforge.net/projects/c-icap/files/c-icap-modules/0.5.x/c_icap_modules-0.5.4.tar.gz
> wget
> https://sourceforge.net/projects/squidclamav/files/squidclamav/7.1/squidclamav-7.1.tar.gz
> for f in *.tar.gz; do tar xf "$f"; done
> cd /tmp/squid-4.9
> ./configure --with-openssl --enable-ssl-crtd --enable-icap-client &&
> make && make install
> #

IIRC this was a CentOS machine, right?
If so, see 
otherwise see the equivalent wiki page for your chosen OS compile.

Those settings install Squid as a system application. So no need for the
/usr/local stuff.


> cd /tmp/c_icap-0.5.6
> ./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe'
> --without-bdb --prefix=/usr/local && make && make install
> #
> cd /tmp/squidclamav-7.1
> ./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe'
> --with-c-icap=/usr/local --with-libarchive && make && make install
> #
> cd /tmp/c_icap_modules-0.5.4
> ./configure 'CFLAGS=-O3 -m64 -pipe'
> 'CPPFLAGS=-I/usr/local/clamav/include' 'LDFLAGS=-L/usr/local/lib
> -L/usr/local/clamav/lib/' && make && make install
> #
> #creating shortcuts and copying files
> #
> cp -f /usr/local/squid/etc/squid.conf /usr/local/squid/etc/squid.conf.orig
> cp -f /usr/local/etc/c-icap.conf /usr/local/etc/c-icap.conf.orig
> cp -f /usr/local/etc/squidclamav.conf /usr/local/etc/squidclamav.conf.orig
> cp -f /usr/local/etc/clamav_mod.conf /usr/local/etc/clamav_mod.conf.orig
> cp -f /usr/local/etc/virus_scan.conf /usr/local/etc/virus_scan.conf.orig
> #
> ln -s /usr/local/squid/etc/squid.conf /etc
> ln -s /usr/local/etc/c-icap.conf /etc
> ln -s /usr/local/etc/squidclamav.conf /etc
> ln -s /usr/local/etc/clamav_mod.conf /etc
> ln -s /usr/local/etc/virus_scan.conf /etc
> #
> mkdir -p /usr/local/clamav/share/clamav
> ln -s /var/lib/clamav /usr/local/clamav/share/clamav
> #
> #tmpfiles for run files
> #
> echo "d /var/run/c-icap 0755 root root -" >> /etc/tmpfiles.d/c-icap.conf
> echo "d /var/run/clamav 0755 root root -" >> /etc/tmpfiles.d/clamav.conf
> #
> #delete a few lines in squid
> #
> sed -i '/http_port 3128/d' /usr/local/squid/etc/squid.conf
> sed -i '/http_access deny all/d' /usr/local/squid/etc/squid.conf

Please do not remove that second line from your squid.conf. It will
result in unpredictable default allow/deny behaviour from your proxy.

Instead I recommend (mind the wrap):

 sed -i 's|# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS|include "/etc/squid/squid.conf.d/*"|' \
   /usr/local/squid/etc/squid.conf

Then you can just drop files into the /etc/squid/squid.conf.d/ directory
and they will be loaded as config on next start or reconfigure.
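As an illustration of that drop-in approach, a file in the directory might look like this (the file name and network range below are placeholders, not anything mandated by Squid):

```
# /etc/squid/squid.conf.d/10-local-clients.conf  (hypothetical example)
acl localnet src 192.168.0.0/16
http_access allow localnet
```

Each such file is pulled in by the include directive, so local policy stays out of the distributed squid.conf.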



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid-cache proxy which does it all

2020-01-08 Thread robert k Wild
hi all,

I have made a script for squid that installs the following –

Squid – http proxy server
Squid ssl-bump – https interception for squid
C-ICAP – icap server
clamAV – AV engine to detect trojan viruses malware etc
squidclamav – to make it all integrated with squid

what do you think?

#!/bin/bash
#squid on DMZ host
#
#first things first lets disable firewalld and SElinux
#
systemctl stop firewalld
systemctl disable firewalld
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#
#squid packages
#
yum install -y epel-release swaks sed tar zip unzip curl telnet openssl
openssl-devel bzip2-devel libarchive libarchive-devel perl perl-Data-Dumper
gcc gcc-c++ binutils autoconf automake make sudo wget libxml2-devel
libcap-devel libtool-ltdl-devel
#
#clamAV packages
#
yum install -y clamav-server clamav-data clamav-update clamav-filesystem
clamav clamav-scanner-systemd clamav-devel clamav-lib clamav-server-systemd
#
#download and compile from source
#
cd /tmp
wget http://www.squid-cache.org/Versions/v4/squid-4.9.tar.gz
wget
http://sourceforge.net/projects/c-icap/files/c-icap/0.5.x/c_icap-0.5.6.tar.gz
wget
http://sourceforge.net/projects/c-icap/files/c-icap-modules/0.5.x/c_icap_modules-0.5.4.tar.gz
wget
https://sourceforge.net/projects/squidclamav/files/squidclamav/7.1/squidclamav-7.1.tar.gz
for f in *.tar.gz; do tar xf "$f"; done
cd /tmp/squid-4.9
./configure --with-openssl --enable-ssl-crtd --enable-icap-client && make
&& make install
#
cd /tmp/c_icap-0.5.6
./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe' --without-bdb
--prefix=/usr/local && make && make install
#
cd /tmp/squidclamav-7.1
./configure 'CXXFLAGS=-O2 -m64 -pipe' 'CFLAGS=-O2 -m64 -pipe'
--with-c-icap=/usr/local --with-libarchive && make && make install
#
cd /tmp/c_icap_modules-0.5.4
./configure 'CFLAGS=-O3 -m64 -pipe' 'CPPFLAGS=-I/usr/local/clamav/include'
'LDFLAGS=-L/usr/local/lib -L/usr/local/clamav/lib/' && make && make install
#
#creating shortcuts and copying files
#
cp -f /usr/local/squid/etc/squid.conf /usr/local/squid/etc/squid.conf.orig
cp -f /usr/local/etc/c-icap.conf /usr/local/etc/c-icap.conf.orig
cp -f /usr/local/etc/squidclamav.conf /usr/local/etc/squidclamav.conf.orig
cp -f /usr/local/etc/clamav_mod.conf /usr/local/etc/clamav_mod.conf.orig
cp -f /usr/local/etc/virus_scan.conf /usr/local/etc/virus_scan.conf.orig
#
ln -s /usr/local/squid/etc/squid.conf /etc
ln -s /usr/local/etc/c-icap.conf /etc
ln -s /usr/local/etc/squidclamav.conf /etc
ln -s /usr/local/etc/clamav_mod.conf /etc
ln -s /usr/local/etc/virus_scan.conf /etc
#
mkdir -p /usr/local/clamav/share/clamav
ln -s /var/lib/clamav /usr/local/clamav/share/clamav
#
#tmpfiles for run files
#
echo "d /var/run/c-icap 0755 root root -" >> /etc/tmpfiles.d/c-icap.conf
echo "d /var/run/clamav 0755 root root -" >> /etc/tmpfiles.d/clamav.conf
#
#delete a few lines in squid
#
sed -i '/http_port 3128/d' /usr/local/squid/etc/squid.conf
sed -i '/http_access deny all/d' /usr/local/squid/etc/squid.conf
#
#whitelist in squid
#
sed -i '50i#HTTP_HTTPS whitelist websites' /usr/local/squid/etc/squid.conf
sed -i '51iacl whitelist ssl::server_name
"/usr/local/squid/etc/urlwhite.txt"' /usr/local/squid/etc/squid.conf
sed -i '52ihttp_access allow whitelist' /usr/local/squid/etc/squid.conf
sed -i '53ihttp_access deny all' /usr/local/squid/etc/squid.conf
echo "#Microsoft" >> /usr/local/squid/etc/urlwhite.txt
echo ".bing.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".msn.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".msedge.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".msftauth.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".msauth.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".msocdn.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".outlook.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".onedrive.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".office.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".office.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".office365.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".microsoft.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".microsoftonline.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".live.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".live.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".akamaized.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".akamaihd.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".svc.ms" >> /usr/local/squid/etc/urlwhite.txt
echo ".lync.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".skype.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".gfx.ms" >> /usr/local/squid/etc/urlwhite.txt
echo ".sharepoint.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".sharepointonline.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".windowsupdate.com" >> /usr/local/squid/etc/urlwhite.txt
echo ".windows.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".edgesuite.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".a-msedge.net" >> /usr/local/squid/etc/urlwhite.txt
echo ".akamaiedge.net" >> 

Re: [squid-users] Squid Cache Problem

2019-07-25 Thread Devilindisguise
Great, thank you.

We'll take a look at the DNS cache and see what we find.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Problem

2019-07-25 Thread Matus UHLAR - fantomas

On 25.07.19 00:41, Devilindisguise wrote:

We have what is probably an easy one. Some Windows servers use a locally
installed Squid proxy instance for all outbound traffic. These servers also
make use of some F5 GTM (DNS) servers to provide a resilient inter-DC DNS
topology.

Essentially what should happen is under steady state conditions any DNS
request should be given IP address a.a.a.a, then under failure be given
b.b.b.b. The GTM DNS TTL is 30 seconds.

What we’re finding is that even after 5 mins of failure any HTTP request
from IE (configured with the Squid proxy) still targets a.a.a.a and traffic
is dropped. During this period if we remove the Squid proxy from the IE
settings, it works as now we target b.b.b.b.

So clearly some sort of caching, possibly DNS, is being done on the Squid.


One of main points of DNS design is to be cacheable.
That is why DNS is not suited for load balancing and failover switching.

however, you should be able to look at content of DNS cache in squid using
cachemgr.cgi to see what's wrong there.

also, you can sniff the DNS traffic to see if only proper responses are
going to squid.
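Two concrete starting points, assuming a reasonably recent Squid: the cache manager's ipcache report, and the squid.conf directives that bound how long Squid keeps DNS answers (the values below are illustrative for a 30-second failover scenario, not defaults; check your version's documentation):

```
# Inspect Squid's DNS/IP cache on the proxy host:
#   squidclient mgr:ipcache
#
# squid.conf directives that cap DNS answer caching:
positive_dns_ttl 30 seconds
negative_dns_ttl 10 seconds
```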
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Cache Problem

2019-07-24 Thread Devilindisguise
Hello all

Let me preface this by stating I am far from being a Squid expert so please
bear with me.

We have what is probably an easy one. Some Windows servers use a locally
installed Squid proxy instance for all outbound traffic. These servers also
make use of some F5 GTM (DNS) servers to provide a resilient inter-DC DNS
topology.

Essentially what should happen is under steady state conditions any DNS
request should be given IP address a.a.a.a, then under failure be given
b.b.b.b. The GTM DNS TTL is 30 seconds.

What we’re finding is that even after 5 mins of failure any HTTP request
from IE (configured with the Squid proxy) still targets a.a.a.a and traffic
is dropped. During this period if we remove the Squid proxy from the IE
settings, it works as now we target b.b.b.b. 

So clearly some sort of caching, possibly DNS, is being done on the Squid. 

Where is a good place to start on Squid to troubleshoot this,

Thank you 



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-20 Thread Antony Stone
On Saturday 20 October 2018 at 16:53:12, Mujtaba Hassan Madani wrote:

> Hi,
> 
> now it works through URL
> 
> http://196.202.134.253:3128/squid-internal-mgr/info instead of
> http://proxy.com:3128/squid-internal-mgr/info

Yes, that is because proxy.com does not belong to you - it points to someone 
else's machine, not *your* Squid server.

And, I repeat (especially now that this IP address has been discussed on a 
public mailing list) you URGENTLY need to review your firewall rules on that 
machine, because Squid is accepting requests from anyone.

It is even *proxying web pages* requested by *anyone*.

If you do not understand what I am telling you, or if you do not know how to 
change your firewall rules on that machine, *stop running Squid on it NOW* and 
do not turn it on again until you know how to secure it.

PS: Please do NOT set "Reply-to" on list emails.


Regards,


Antony.

-- 
"In fact I wanted to be John Cleese and it took me some time to realise that 
the job was already taken."

 - Douglas Adams

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-20 Thread Mujtaba Hassan Madani
Hi,

now it works through URL

http://196.202.134.253:3128/squid-internal-mgr/info instead of 
http://proxy.com:3128/squid-internal-mgr/info





From: squid-users  on behalf of 
Antony Stone 
Sent: Saturday, October 20, 2018 2:08:32 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On Saturday 20 October 2018 at 15:59:36, Mujtaba Hassan Madani wrote:

> Hi Antony,
>
>   this is the first IP Connection to 34.194.132.99 failed.

That is the address which "proxy.com" resolves to on my machine too.

> the IP of my server is 196.202.134.253

So, does this give you any clues as to why trying to connect to "proxy.com"
fails to connect to your Squid server?

> i got the instruction of changing of from below default setting form URL

I do not see anywhere in those instructions where it tells you to change this.

> http_access allow localhost manager
> http_access deny manager  so i can login through
> browser to the cache manager
>
> https://wiki.squid-cache.org/action/show/Features/CacheManager?action=show;
> redirect=SquidFaq%2FCacheManager

What does http://196.202.134.253:3128/squid-internal-mgr/info tell you?

Rather worryingly, it works for me from here - you URGENTLY need to review
your firewall rules on a Squid server on a public IP address which accepts
connections from anyone.


PS: Please do NOT set "Reply-to" on list emails.


Regards,


Antony.

--
Neurotics build castles in the sky;
Psychotics live in them;
Psychiatrists collect the rent.


   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-20 Thread Antony Stone
On Saturday 20 October 2018 at 15:59:36, Mujtaba Hassan Madani wrote:

> Hi Antony,
> 
>   this is the first IP Connection to 34.194.132.99 failed.

That is the address which "proxy.com" resolves to on my machine too.

> the IP of my server is 196.202.134.253

So, does this give you any clues as to why trying to connect to "proxy.com" 
fails to connect to your Squid server?

> i got the instruction of changing of from below default setting form URL

I do not see anywhere in those instructions where it tells you to change this.

> http_access allow localhost manager
> http_access deny manager  so i can login through
> browser to the cache manager
> 
> https://wiki.squid-cache.org/action/show/Features/CacheManager?action=show;
> redirect=SquidFaq%2FCacheManager

What does http://196.202.134.253:3128/squid-internal-mgr/info tell you?

Rather worryingly, it works for me from here - you URGENTLY need to review 
your firewall rules on a Squid server on a public IP address which accepts 
connections from anyone.


PS: Please do NOT set "Reply-to" on list emails.


Regards,


Antony.

-- 
Neurotics build castles in the sky;
Psychotics live in them;
Psychiatrists collect the rent.


   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-20 Thread Mujtaba Hassan Madani
Hi Antony,

  this is the first IP Connection to 34.194.132.99 failed.

the IP of my server is 196.202.134.253

i got the instruction of changing of from below default setting form URL

http_access allow localhost manager
http_access deny manager  so i can login through 
browser to the cache manager

https://wiki.squid-cache.org/action/show/Features/CacheManager?action=show;redirect=SquidFaq%2FCacheManager

regards


Mujtaba H,


From: squid-users  on behalf of 
Antony Stone 
Sent: Saturday, October 20, 2018 1:26:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On Saturday 20 October 2018 at 14:56:33, Mujtaba Hassan Madani wrote:

> Hi Amos,
>
>I get the attached message when trying to access the cache manager through the web
> interface; below is my full URL
>
> http://proxy.com:3128/squid-internal-mgr/info

1. What IP address does "proxy.com" resolve to on your network?

2. What is the IP address of your Squid server?

> according to squid.org feedback
>
>  https://wiki.squid-cache.org/action/show/Features/CacheManager?action=show;redirect=SquidFaq%2FCacheManager
>
> i should change the following command in the squid.config file i already
> did but with no avail
>
> http_access allow localhost manager
> http_access deny manager

The above is the default, and is correct.  Where in the above instructions do
you believe it tells you to change this?

> i replaced deny manager by allow manager. please advice.

I advise changing it back again.

Please let us know about the two IP addresses I asked about above.


Regards,


Antony.

--
BASIC is to computer languages what Roman numerals are to arithmetic.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-20 Thread Mujtaba Hassan Madani
Hi Amos,

   I get the attached message when trying to access the cache manager through the web 
interface; below is my full URL

http://proxy.com:3128/squid-internal-mgr/info

according to squid.org feedback

 
https://wiki.squid-cache.org/action/show/Features/CacheManager?action=show;redirect=SquidFaq%2FCacheManager

I should change the following directive in the squid.config file; I already did, 
but to no avail

http_access allow localhost manager
http_access deny manager

I replaced "deny manager" with "allow manager". Please advise.

regards


Mujtaba H


From: Amos Jeffries 
Sent: Saturday, October 13, 2018 7:58:34 AM
To: Mujtaba Hassan Madani; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 12/10/18 7:55 AM, Mujtaba   Hassan Madani wrote:
> Hi Amos,
>
> I have change my domain name to proxy instead of that long one per
> your advice i was wondering where to get information about my current
> caching files and it's size ? i login to
> http://proxy:3128/squid-internal-mgr/info for that but with no success
> attached is web respond. please advise
>

The proxy hostname "proxy:3128" does need to resolve in DNS to access it
this way. That is what the browser is complaining about.

Alternatively you maybe can use the Linux/BSD command line tool on the
proxy machine itself:
   squidclient mgr:info

(but given this seems to be a NAS situation it may not be installed there).


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-13 Thread Amos Jeffries
On 12/10/18 7:55 AM, Mujtaba   Hassan Madani wrote:
> Hi Amos,
> 
>     I have change my domain name to proxy instead of that long one per
> your advice i was wondering where to get information about my current
> caching files and it's size ? i login to
> http://proxy:3128/squid-internal-mgr/info for that but with no success
> attached is web respond. please advise 
> 

The proxy hostname "proxy:3128" does need to resolve in DNS to access it
this way. That is what the browser is complaining about.

Alternatively you maybe can use the Linux/BSD command line tool on the
proxy machine itself:
   squidclient mgr:info

(but given this seems to be a NAS situation it may not be installed there).


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-06 Thread Mujtaba Hassan Madani
If I am not mistaken, I need to change my proxy server host name to a simpler 
one and log in via the URL below to check the management info about cached files 
and their sizes

http://sudasat-hp-compaq-dc7600-convertible-minitower.local:3128/squid-internal-mgr/info

am i right ?


From: squid-users  on behalf of Amos 
Jeffries 
Sent: Saturday, October 6, 2018 4:41:23 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 6/10/18 6:22 AM, Mujtaba   Hassan Madani wrote:
> Hi,
>
>   I also get the attached error while been trying to access through web
> interface for private IP address. I managed to add the IP on the bypass
> proxy server for the local address but, i cant do for all since I have
> many private IP i use to login to
>

The "error" says you do not have any web server running at the IP
address 172.17.2.1 port 80.

Or if there is, the network does not permit Squid to make TCP
connections to it.


If you mean the *Squid* built-in web interface (aka manager reports).
You are missing the public hostname of the proxy, the forward-proxy port
and the report name (path) being fetched.
 
http://sudasat-hp-compaq-dc7600-convertible-minitower.local:3128/squid-internal-mgr/info

(you really should find a simpler domain name for your proxy to use,
that is quite a long name).

> regards
>
>
> Mujtaba H
>
> 
> *From:* Mujtaba Hassan Madani 
> *Sent:* Friday, October 5, 2018 2:59:09 PM
>
>
> Hi Amos,
>
>I have my squid service running now on ubuntu desktop version 16.0
> the path is /etc/squid/squid.config for the configuration file.

Okay. I'm not sure what relevance the location of the squid.conf file
has. Usually the settings inside it are needed to answer questions about
your proxy's behaviour. But there are no such questions so far in this
thread.


> my
> question is where can i check the cached file and traffic volume in KB
> that been cached ?

The "info" management report from Squid will tell you that.

The URL for that will be like:
  http://your-proxies-domain-name:3128/squid-internal-mgr/info


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-05 Thread Amos Jeffries
On 6/10/18 6:22 AM, Mujtaba   Hassan Madani wrote:
> Hi,
> 
>   I also get the attached error while been trying to access through web
> interface for private IP address. I managed to add the IP on the bypass
> proxy server for the local address but, i cant do for all since I have
> many private IP i use to login to 
> 

The "error" says you do not have any web server running at the IP
address 172.17.2.1 port 80.

Or if there is, the network does not permit Squid to make TCP
connections to it.


If you mean the *Squid* built-in web interface (aka manager reports).
You are missing the public hostname of the proxy, the forward-proxy port
and the report name (path) being fetched.
 
http://sudasat-hp-compaq-dc7600-convertible-minitower.local:3128/squid-internal-mgr/info

(you really should find a simpler domain name for your proxy to use,
that is quite a long name).

> regards
> 
> 
> Mujtaba H
> 
> 
> *From:* Mujtaba Hassan Madani 
> *Sent:* Friday, October 5, 2018 2:59:09 PM
>  
> 
> Hi Amos,
> 
>    I have my squid service running now on ubuntu desktop version 16.0
> the path is /etc/squid/squid.config for the configuration file.

Okay. I'm not sure what relevance the location of the squid.conf file
has. Usually the settings inside it are needed to answer questions about
your proxy's behaviour. But there are no such questions so far in this
thread.


> my
> question is where can i check the cached file and traffic volume in KB
> that been cached ?

The "info" management report from Squid will tell you that.

The URL for that will be like:
  http://your-proxies-domain-name:3128/squid-internal-mgr/info


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-10-05 Thread Mujtaba Hassan Madani
Hi Amos,

   I have my Squid service running now on Ubuntu desktop version 16.0; the path 
is /etc/squid/squid.config for the configuration file. My question is: where can 
I check the cached files and the traffic volume in KB that have been cached?


From: squid-users  on behalf of 
Mujtaba Hassan Madani 
Sent: Wednesday, September 19, 2018 1:46:44 PM
To: Amos Jeffries; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server


Hi Amos,

 Thanks for your concern. As I informed you, I am looking to install Squid on an 
Ubuntu Linux server for caching purposes; once I kick off I will notify you so 
I can have your assistance.

regards


Mujtaba H,


From: squid-users  on behalf of Amos 
Jeffries 
Sent: Sunday, September 16, 2018 4:58:37 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 15/09/18 12:13 PM, Mujtaba   Hassan Madani wrote:
> Hi Amos,
>
>you did not get back to me about my below concern
>

I responded to your concern about copyright.

I do not see anything else in your messages as expressing a concern to
be responded to.

Amos


> 
> *From:* Mujtaba Hassan Madani
> *Sent:* Thursday, September 13, 2018 5:36:48 PM
>
>
> Hi Amos,
>
>    I am looking to build a Squid proxy server on Ubuntu for my LAN
> serving up to 25 PCs. I just want the maximum potential of the server's
> capability, to enhance network performance and better meet users'
> expectations of the service.
>

> 
> *From:* Amos Jeffries
> *Sent:* Wednesday, September 12, 2018 2:54:37 PM
>
> On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
>> Dear Squid Team,
>>
>>  how does a content provider prevent content from being cached while it passes
>> through a Squid proxy? Is it by copyright law
>
> No. Contents which can be transferred through a proxy are implicitly
> licensed for re-distribution.
>
> Legal issues are usually encountered only around interception or
> modification of content.
>
>
>> or some  encryption is
>
> Sometimes.
>
>> implemented in the traffic ?
>
> and other features built into HTTP protocol.
>
>
>> and where can I find the contents that have been
>> cached on my Squid proxy?
>>
>
> Depends on your config. Usually in the machine RAM.
>
> What are you looking for exactly? and why?
>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-19 Thread Mujtaba Hassan Madani
Hi Amos,

 Thanks for your concern. As I informed you, I am looking to install Squid on an 
Ubuntu Linux server for caching purposes; once I kick off I will notify you so 
I can have your assistance.

regards


Mujtaba H,


From: squid-users  on behalf of Amos 
Jeffries 
Sent: Sunday, September 16, 2018 4:58:37 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 15/09/18 12:13 PM, Mujtaba   Hassan Madani wrote:
> Hi Amos,
>
>you did not get back to me about my below concern
>

I responded to your concern about copyright.

I do not see anything else in your messages as expressing a concern to
be responded to.

Amos


> 
> *From:* Mujtaba Hassan Madani
> *Sent:* Thursday, September 13, 2018 5:36:48 PM
>
>
> Hi Amos,
>
>    I am looking to build a Squid proxy server on Ubuntu for my LAN
> serving up to 25 PCs. I just want the maximum potential of the server's
> capability, to enhance network performance and better meet users'
> expectations of the service.
>

> 
> *From:* Amos Jeffries
> *Sent:* Wednesday, September 12, 2018 2:54:37 PM
>
> On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
>> Dear Squid Team,
>>
>>  how does a content provider prevent content from being cached while it passes
>> through a Squid proxy? Is it by copyright law
>
> No. Contents which can be transferred through a proxy are implicitly
> licensed for re-distribution.
>
> Legal issues are usually encountered only around interception or
> modification of content.
>
>
>> or some  encryption is
>
> Sometimes.
>
>> implemented in the traffic ?
>
> and other features built into HTTP protocol.
>
>
>> and where can I find the contents that have been
>> cached on my Squid proxy?
>>
>
> Depends on your config. Usually in the machine RAM.
>
> What are you looking for exactly? and why?
>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-16 Thread Amos Jeffries
On 15/09/18 12:13 PM, Mujtaba   Hassan Madani wrote:
> Hi Amos,
> 
>    you did not get back to me about my below concern 
> 

I responded to your concern about copyright.

I do not see anything else in your messages as expressing a concern to
be responded to.

Amos


> 
> *From:* Mujtaba Hassan Madani
> *Sent:* Thursday, September 13, 2018 5:36:48 PM
>  
> 
> Hi Amos,
> 
>    I am looking to build a Squid proxy server on Ubuntu for my LAN
> serving up to 25 PCs. I just want the maximum potential of the server's
> capability, to enhance network performance and better meet users'
> expectations of the service.
> 

> 
> *From:* Amos Jeffries
> *Sent:* Wednesday, September 12, 2018 2:54:37 PM
>  
> On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
>> Dear Squid Team,
>> 
>>      how does a content provider prevent content from being cached while it passes
>> through a Squid proxy? Is it by copyright law
> 
> No. Contents which can be transferred through a proxy are implicitly
> licensed for re-distribution.
> 
> Legal issues are usually encountered only around interception or
> modification of content.
> 
> 
>> or some  encryption is
> 
> Sometimes.
> 
>> implemented in the traffic ?
> 
> and other features built into HTTP protocol.
> 
> 
>> and where can I find the contents that have been
>> cached on my Squid proxy?
>> 
> 
> Depends on your config. Usually in the machine RAM.
> 
> What are you looking for exactly? and why?
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-14 Thread Mujtaba Hassan Madani
Hi Amos,

   you did not get back to me about my concern below.

Regards


Mujtaba H,




From: Mujtaba Hassan Madani 
Sent: Thursday, September 13, 2018 5:36:48 PM
To: Amos Jeffries; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server


Hi Amos,

   I am looking to build a Squid proxy server on Ubuntu for my LAN serving 
up to 25 PCs. I just want the maximum potential of the server's capability, to 
enhance network performance and better meet users' expectations of the 
service.

regards



Mujtaba H,


From: squid-users  on behalf of Amos 
Jeffries 
Sent: Wednesday, September 12, 2018 2:54:37 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
> Dear Squid Team,
>
>  how does a content provider prevent content from being cached while it passes
> through a Squid proxy? Is it by copyright law

No. Contents which can be transferred through a proxy are implicitly
licensed for re-distribution.

Legal issues are usually encountered only around interception or
modification of content.


> or some  encryption is

Sometimes.

> implemented in the traffic ?

and other features built into HTTP protocol.


> and where can I find the contents that have been
> cached on my Squid proxy?
>

Depends on your config. Usually in the machine RAM.

What are you looking for exactly? and why?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-13 Thread Mujtaba Hassan Madani
Hi Amos,

   I am looking to build a Squid proxy server on Ubuntu for my LAN serving 
up to 25 PCs. I just want the maximum potential of the server's capability, to 
enhance network performance and better meet users' expectations of the 
service.

regards



Mujtaba H,


From: squid-users  on behalf of Amos 
Jeffries 
Sent: Wednesday, September 12, 2018 2:54:37 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
> Dear Squid Team,
>
>  how does a content provider prevent content from being cached while it passes
> through a Squid proxy? Is it by copyright law

No. Contents which can be transferred through a proxy are implicitly
licensed for re-distribution.

Legal issues are usually encountered only around interception or
modification of content.


> or some  encryption is

Sometimes.

> implemented in the traffic ?

and other features built into HTTP protocol.


> and where can I find the contents that have been
> cached on my Squid proxy?
>

Depends on your config. Usually in the machine RAM.

What are you looking for exactly? and why?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-12 Thread Amos Jeffries
On 13/09/18 2:16 AM, Mujtaba   Hassan Madani wrote:
> Dear Squid Team,
> 
>>      how does a content provider prevent content from being cached while it passes
>> through a Squid proxy? Is it by copyright law

No. Contents which can be transferred through a proxy are implicitly
licensed for re-distribution.

Legal issues are usually encountered only around interception or
modification of content.


> or some  encryption is

Sometimes.

> implemented in the traffic ?

and other features built into HTTP protocol.


>> and where can I find the contents that have been
>> cached on my Squid proxy?
> 

Depends on your config. Usually in the machine RAM.

What are you looking for exactly? and why?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-12 Thread Mujtaba Hassan Madani
Dear Squid Team,

 how does a content provider prevent content from being cached while it passes 
through a Squid proxy? Is it by copyright law, or is some encryption implemented 
in the traffic? And where can I find the contents that have been cached on my 
Squid proxy?

thanks for your assistance

Mujtaba H,


From: squid-users  on behalf of 
Antony Stone 
Sent: Tuesday, September 11, 2018 8:54:05 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid Cache Server

On Tuesday 11 September 2018 at 10:43:13, Mujtaba Hassan Madani wrote:

> Hi Squid team,
>
> I just want to know if Squid can cache software, for example Windows
> Update, Java, etc.

Squid doesn't care what a file is for - whether it's "software", web pages,
images, music, video...

Squid will try to cache anything which gets requested through it, no matter
what it is.

Whether or not any given thing *can* be cached is far more up to the content
provider to decide - there are various HTTP headers they can use to say "don't
cache this" or similar, and some things which you can download have different
URLs at different times, and Squid can't tell that they are actually the same
thing.

So, yes, Squid _can_ cache "software".  But just as with any other type of
content, the provider may tell Squid that it isn't allowed to.


Regards,


Antony.

--
What makes you think I know what I'm talking about?
I just have more O'Reilly books than most people.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Server

2018-09-11 Thread Antony Stone
On Tuesday 11 September 2018 at 10:43:13, Mujtaba Hassan Madani wrote:

> Hi Squid team,
> 
> I just want to know if Squid can cache software, for example Windows
> Update, Java, etc.

Squid doesn't care what a file is for - whether it's "software", web pages, 
images, music, video...

Squid will try to cache anything which gets requested through it, no matter 
what it is.

Whether or not any given thing *can* be cached is far more up to the content 
provider to decide - there are various HTTP headers they can use to say "don't 
cache this" or similar, and some things which you can download have different 
URLs at different times, and Squid can't tell that they are actually the same 
thing.

So, yes, Squid _can_ cache "software".  But just as with any other type of 
content, the provider may tell Squid that it isn't allowed to.
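As an illustration of the headers mentioned above, here is a simplified Python sketch of how Cache-Control directives set by a content provider can mark a response as uncacheable for a shared cache like Squid. This is an illustration, not Squid's actual logic; it ignores many RFC 9111 nuances such as `Authorization` handling, `s-maxage`, and heuristic freshness:

```python
def is_cacheable_by_shared_cache(cache_control: str) -> bool:
    # Split the Cache-Control header into directive names, ignoring values
    # (e.g. "max-age=3600" -> "max-age").
    directives = {
        part.strip().split("=", 1)[0].lower()
        for part in cache_control.split(",")
        if part.strip()
    }
    # "no-store" forbids caching entirely; "private" forbids shared caches
    # (like a Squid proxy) from storing the response.
    return not directives & {"no-store", "private"}

print(is_cacheable_by_shared_cache("public, max-age=3600"))  # True
print(is_cacheable_by_shared_cache("no-store"))              # False
```

The "different URLs at different times" problem Antony mentions is separate: even a cacheable object is missed if each request uses a unique URL.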


Regards,


Antony.

-- 
What makes you think I know what I'm talking about?
I just have more O'Reilly books than most people.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Cache Server

2018-09-11 Thread Mujtaba Hassan Madani
Hi Squid team,

I just want to know if Squid can cache software, for example Windows Update, 
Java, etc.

regards


Mujtaba Hassan
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache takes a break

2017-09-13 Thread Vieri
Thanks for the suggestion. I'm sure ufdbguard works great even though it's not 
maintained/updated on my distro (Gentoo).

I use ready-made helpers/redirectors like squidGuard on other systems.
However, on this system I wanted to avoid depending on extra software. I also 
wanted to make my own helper so I could then combine Squid ACLs and do things 
such as:
- block access to blacklisted URLs on a Squid setup with transparent ssl_bump 
(no proxy auth)

- show custom deny web page with optional auth form to bypass this restriction
- authenticate via LDAP using a custom web form, and insert the user's client 
IP address into a database with a timeout
- auto-redirect the request to the restricted web site so the user on a 
particular client host can access the site for a given time frame

- use a squid ACL to look up the user's host IP address in the DB, and decide 
to allow or not


In any case, I've been experiencing lots of issues with Squid during the past two 
weeks. I can finally say that I've fine-tuned my setup thanks to the great help 
I found on this ML. One of the things that was nagging me was the helper part. 
Knowing how helpers work, and how they can be optimized under heavy traffic loads, 
is "a good thing". For starters, I did not know how to use the concurrency 
option and how its use could benefit overall performance.


Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache takes a break

2017-09-12 Thread Eliezer Croitoru
Well, the ready-to-use products are not always what you need or want.
Even squid is not good enough for many scenarios...

If it works with shallalist, that's nice, but it's not the real deal for most cases.
Vieri might or might not clarify his scenario, but the issue here is nothing other 
than working with squid and a helper.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri
Sent: Tuesday, September 12, 2017 23:54
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid cache takes a break

It is just enough not to reinvent the wheel. What the OP needs already
exists and is called ufdbGuard. And it works perfectly with shallalist :)


13.09.2017 2:51, Eliezer Croitoru пишет:
> I just must add that if you understand how TCP works (which the helpers use to 
> communicate with squid) then it makes sense that it is possible that...
> The sender (i.e. squid) has sent 100 lines but the client software has yet to process 
> them, since they are sitting in an OS or other software/hardware buffer.
>
> For me it was hard to understand at first: since it's an STDIN/STDOUT 
> interface I assumed it would block after every write, but it does not...
>
> If the helper can process every incoming request 
> with threading or another method of concurrency, then the performance of the 
> helper, and thereby of squid, may be better; but if just using the basic buffer 
> works fine for you, then great.
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Amos Jeffries
> Sent: Tuesday, September 12, 2017 16:08
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] squid cache takes a break
>
> On 12/09/17 22:59, Vieri wrote:
>> 
>> From: Amos Jeffries <squ...@treenet.co.nz>
>>> That is all it needs to do to begin with; parse off the numeric value
>>> from the input line and send it back as prefix on the output line. The
>>> helper does not need threading or anything particularly special for the 
>>> minimal support.
>>
>> I thought it had to be asynchronous.
>> The docs say "Only used with helpers capable of processing more than one 
>> query at a time."
>>
>> Example:
>> Squid sends "1 URI1" (or whatever) to the helper.
>> It does not wait for an immediate response.
>> In fact, Squid can send "2 URI2" before getting the reply to ID 1, right?
> Yes.
>
>
>> In my case, the helper is synchronous, non-MT. I don't think it will improve 
>> the time responses per-se.
>>
>> In any case, my helper won't be able to process more than one query AT A 
>> TIME.
>>
>> I tried it anyway. So here's the relevant code:
>>
>> while (<STDIN>)
>> {
>> s/^\s*//;
>> s/\s*$//;
>> my @squidin = split;
>> my $squidn = scalar @squidin;
>> undef $url;
>> undef $channelid;
>> if ( ($squidn == 2 ) && (defined $squidin[0]) && ($squidin[0] =~ /^\d+?$/) ) 
>> {
>> $channelid = $squidin[0];
>> $url = $squidin[1] if (defined $squidin[1]);
>> } else {
>> $url = $squidin[0] if (defined $squidin[0]);
>> }
>>
>> [...]
>> logtofile("Channel-ID: ".$channelid."\n") if ((defined $channelid) && 
>> ($debug >= 1));
>>
>> [...do DB lookups, reply accordingly...]
>>
>> if (defined $channelid) {
>> print( $channelid." OK\n" );
>> logtofile( $channelid." OK\n" ) if ($debug >= 1);
>> } else {
>> print( "OK\n" );
>> logtofile( "OK\n" ) if ($debug >= 1);
>> }
>> [...similar responses for ERR messages...]
>> }
>>
>> Here's the relevant squid.conf line:
>> external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
>> children-startup=10 children-idle=3 concurrency=8 %URI 
>> /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
>>
>> How can I check in the Squid log that concurrency is "working"?
> Section 82, level 2 or 4 should log the queries.
>
> Better info is in the cachemgr/squidclient "external_acl" report. Each 
> helper is listed with its total and summary stats for each helper child.
>
>> If the helper logs to a text file as in the trimmed code above, I notice 
>> that the channel ID is always 0.

Re: [squid-users] squid cache takes a break

2017-09-12 Thread Yuri
It is just enough not to reinvent the wheel. What the OP needs already
exists and is called ufdbGuard. And it works perfectly with shallalist :)


13.09.2017 2:51, Eliezer Croitoru пишет:
> I just must add that if you understand how TCP works (which the helpers use to 
> communicate with squid) then it makes sense that it is possible that...
> The sender (i.e. squid) has sent 100 lines but the client software has yet to process 
> them, since they are sitting in an OS or other software/hardware buffer.
>
> For me it was hard to understand at first: since it's an STDIN/STDOUT 
> interface I assumed it would block after every write, but it does not...
>
> If the helper can process every incoming request 
> with threading or another method of concurrency, then the performance of the 
> helper, and thereby of squid, may be better; but if just using the basic buffer 
> works fine for you, then great.
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Amos Jeffries
> Sent: Tuesday, September 12, 2017 16:08
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] squid cache takes a break
>
> On 12/09/17 22:59, Vieri wrote:
>> 
>> From: Amos Jeffries <squ...@treenet.co.nz>
>>> That is all it needs to do to begin with; parse off the numeric value
>>> from the input line and send it back as prefix on the output line. The
>>> helper does not need threading or anything particularly special for the 
>>> minimal support.
>>
>> I thought it had to be asynchronous.
>> The docs say "Only used with helpers capable of processing more than one 
>> query at a time."
>>
>> Example:
>> Squid sends "1 URI1" (or whatever) to the helper.
>> It does not wait for an immediate response.
>> In fact, Squid can send "2 URI2" before getting the reply to ID 1, right?
> Yes.
>
>
>> In my case, the helper is synchronous, non-MT. I don't think it will improve 
>> the time responses per-se.
>>
>> In any case, my helper won't be able to process more than one query AT A 
>> TIME.
>>
>> I tried it anyway. So here's the relevant code:
>>
>> while (<STDIN>)
>> {
>> s/^\s*//;
>> s/\s*$//;
>> my @squidin = split;
>> my $squidn = scalar @squidin;
>> undef $url;
>> undef $channelid;
>> if ( ($squidn == 2 ) && (defined $squidin[0]) && ($squidin[0] =~ /^\d+?$/) ) 
>> {
>> $channelid = $squidin[0];
>> $url = $squidin[1] if (defined $squidin[1]);
>> } else {
>> $url = $squidin[0] if (defined $squidin[0]);
>> }
>>
>> [...]
>> logtofile("Channel-ID: ".$channelid."\n") if ((defined $channelid) && 
>> ($debug >= 1));
>>
>> [...do DB lookups, reply accordingly...]
>>
>> if (defined $channelid) {
>> print( $channelid." OK\n" );
>> logtofile( $channelid." OK\n" ) if ($debug >= 1);
>> } else {
>> print( "OK\n" );
>> logtofile( "OK\n" ) if ($debug >= 1);
>> }
>> [...similar responses for ERR messages...]
>> }
>>
>> Here's the relevant squid.conf line:
>> external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
>> children-startup=10 children-idle=3 concurrency=8 %URI 
>> /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
>>
>> How can I check in the Squid log that concurrency is "working"?
> Section 82, level 2 or 4 should log the queries.
>
> Better info is in the cachemgr/squidclient "external_acl" report. Each 
> helper is listed with its total and summary stats for each helper child.
>
>> If the helper logs to a text file as in the trimmed code above, I notice 
>> that the channel ID is always 0. I get messages such as:
>> Channel-ID: 0
>> 0 OK
>> Channel-ID: 0
>> 0 ERR ...
>>
>> Is this expected?
> Maybe.
>
> If you make the helper pause a bit and throw a large number of different 
> URLs at Squid you should see it grow a bit higher than 0.
>
>> Despite this, I can see that the number of helper processes does not 
>> increase over time for now, and that HTTP/S client browsing is responsive 
>> enough.
>> # ps aux | grep -c squid_url_lookup.pl
>> 11
>>
> Yay.
>
>> One last thing. I'm using:
>> cache_dir diskd /var/cache/squid 100 16 256
>> I may want to try to comment out this directive for improved I/O performance.
>>
>> Thanks,
>>
>> Vieri
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users




signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache takes a break

2017-09-12 Thread Eliezer Croitoru
I just must add that if you understand how TCP works (which the helpers use to 
communicate with squid) then it makes sense that it is possible that...
The sender (i.e. squid) has sent 100 lines but the client software has yet to process 
them, since they are sitting in an OS or other software/hardware buffer.

For me it was hard to understand at first: since it's an STDIN/STDOUT interface 
I assumed it would block after every write, but it does not...

If the helper can process every incoming request 
with threading or another method of concurrency, then the performance of the 
helper, and thereby of squid, may be better; but if just using the basic buffer 
works fine for you, then great.
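The buffering behaviour described above can be demonstrated with a small Python sketch (illustrative only, using an anonymous pipe in place of the squid-to-helper channel): the sender can queue many lines before the reader consumes any, because the bytes sit in the kernel's pipe buffer rather than blocking per write.

```python
import os

# Create an anonymous pipe: "w" plays the sender (squid), "r" the helper.
r, w = os.pipe()

# 100 short lines (500 bytes) fit comfortably inside the default pipe
# buffer (64 KiB on Linux), so none of these writes block even though
# the reader has not consumed a single byte yet.
os.write(w, b"line\n" * 100)
os.close(w)

# The "helper" side now finds all 100 queued lines waiting for it.
data = os.read(r, 65536)
os.close(r)
print(data.count(b"\n"))  # 100
```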

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Tuesday, September 12, 2017 16:08
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid cache takes a break

On 12/09/17 22:59, Vieri wrote:
> 
> 
> From: Amos Jeffries <squ...@treenet.co.nz>
>>
>> That is all it needs to do to begin with; parse off the numeric value
>> from the input line and send it back as prefix on the output line. The
> 
>> helper does not need threading or anything particularly special for the 
>> minimal support.
> 
> 
> I thought it had to be asynchronous.
> The docs say "Only used with helpers capable of processing more than one 
> query at a time."
> 
> Example:
> Squid sends "1 URI1" (or whatever) to the helper.
> It does not wait for an immediate response.
> In fact, Squid can send "2 URI2" before getting the reply to ID 1, right?

Yes.


> In my case, the helper is synchronous, non-MT. I don't think it will improve 
> the time responses per-se.
> 
> In any case, my helper won't be able to process more than one query AT A TIME.
> 
> I tried it anyway. So here's the relevant code:
> 
> while (<STDIN>)
> {
> s/^\s*//;
> s/\s*$//;
> my @squidin = split;
> my $squidn = scalar @squidin;
> undef $url;
> undef $channelid;
> if ( ($squidn == 2 ) && (defined $squidin[0]) && ($squidin[0] =~ /^\d+?$/) ) {
> $channelid = $squidin[0];
> $url = $squidin[1] if (defined $squidin[1]);
> } else {
> $url = $squidin[0] if (defined $squidin[0]);
> }
> 
> [...]
> logtofile("Channel-ID: ".$channelid."\n") if ((defined $channelid) && ($debug 
> >= 1));
> 
> [...do DB lookups, reply accordingly...]
> 
> if (defined $channelid) {
> print( $channelid." OK\n" );
> logtofile( $channelid." OK\n" ) if ($debug >= 1);
> } else {
> print( "OK\n" );
> logtofile( "OK\n" ) if ($debug >= 1);
> }
> [...similar responses for ERR messages...]
> }
> 
> Here's the relevant squid.conf line:
> external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
> children-startup=10 children-idle=3 concurrency=8 %URI 
> /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
> 
> How can I check in the Squid log that concurrency is "working"?

Section 82, level 2 or 4 should log the queries.

Better info is in the cachemgr/squidclient "external_acl" report. Each 
helper is listed with its total and summary stats for each helper child.

> 
> If the helper logs to a text file as in the trimmed code above, I notice that 
> the channel ID is always 0. I get messages such as:
> Channel-ID: 0
> 0 OK
> Channel-ID: 0
> 0 ERR ...
> 
> Is this expected?

Maybe.

If you make the helper pause a bit and throw a large number of different 
URLs at Squid you should see it grow a bit higher than 0.

> 
> Despite this, I can see that the number of helper processes does not increase 
> over time for now, and that HTTP/S client browsing is responsive enough.
> # ps aux | grep -c squid_url_lookup.pl
> 11
> 

Yay.

> One last thing. I'm using:
> cache_dir diskd /var/cache/squid 100 16 256
> I may want to try to comment out this directive for improved I/O performance.
> 
> Thanks,
> 
> Vieri

Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squid cache takes a break

2017-09-12 Thread Amos Jeffries

On 12/09/17 22:59, Vieri wrote:



From: Amos Jeffries 

That is all it needs to do to begin with; parse off the numeric value
from the input line and send it back as prefix on the output line. The
helper does not need threading or anything particularly special for the
minimal support.



I thought it had to be asynchronous.
The docs say "Only used with helpers capable of processing more than one query at a 
time."

Example:
Squid sends "1 URI1" (or whatever) to the helper.
It does not wait for an immediate response.
In fact, Squid can send "2 URI2" before getting the reply to ID 1, right?


Yes.



In my case, the helper is synchronous, non-MT. I don't think it will improve 
the time responses per-se.

In any case, my helper won't be able to process more than one query AT A TIME.

I tried it anyway. So here's the relevant code:

while (<STDIN>)
{
s/^\s*//;
s/\s*$//;
my @squidin = split;
my $squidn = scalar @squidin;
undef $url;
undef $channelid;
if ( ($squidn == 2 ) && (defined $squidin[0]) && ($squidin[0] =~ /^\d+?$/) ) {
$channelid = $squidin[0];
$url = $squidin[1] if (defined $squidin[1]);
} else {
$url = $squidin[0] if (defined $squidin[0]);
}

[...]
logtofile("Channel-ID: ".$channelid."\n") if ((defined $channelid) && ($debug 
>= 1));

[...do DB lookups, reply accordingly...]

if (defined $channelid) {
print( $channelid." OK\n" );
logtofile( $channelid." OK\n" ) if ($debug >= 1);
} else {
print( "OK\n" );
logtofile( "OK\n" ) if ($debug >= 1);
}
[...similar responses for ERR messages...]
}

Here's the relevant squid.conf line:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
children-startup=10 children-idle=3 concurrency=8 %URI 
/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]

How can I check in the Squid log that concurrency is "working"?


Section 82, level 2 or 4 should log the queries.

Better info is in the cachemgr/squidclient "external_acl" report. Each 
helper is listed with its total and summary stats for each helper child.




If the helper logs to a text file as in the trimmed code above, I notice that 
the channel ID is always 0. I get messages such as:
Channel-ID: 0
0 OK
Channel-ID: 0
0 ERR ...

Is this expected?


Maybe.

If you make the helper pause a bit and throw a large number of different 
URLs at Squid you should see it grow a bit higher than 0.




Despite this, I can see that the number of helper processes does not increase 
over time for now, and that HTTP/S client browsing is responsive enough.
# ps aux | grep -c squid_url_lookup.pl
11



Yay.


One last thing. I'm using:
cache_dir diskd /var/cache/squid 100 16 256
I may want to try to comment out this directive for improved I/O performance.

Thanks,

Vieri


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache takes a break

2017-09-12 Thread Vieri


From: Amos Jeffries 
>
> That is all it needs to do to begin with; parse off the numeric value 
> from the input line and send it back as prefix on the output line. The 
> helper does not need threading or anything particularly special for the 
> minimal support.


I thought it had to be asynchronous.
The docs say "Only used with helpers capable of processing more than one query 
at a time."

Example:
Squid sends "1 URI1" (or whatever) to the helper.
It does not wait for an immediate response.
In fact, Squid can send "2 URI2" before getting the reply to ID 1, right?
In my case, the helper is synchronous and not multi-threaded. I don't think it 
will improve response times per se.

In any case, my helper won't be able to process more than one query AT A TIME.

I tried it anyway. So here's the relevant code:

while (<STDIN>)
{
s/^\s*//;
s/\s*$//;
my @squidin = split;
my $squidn = scalar @squidin;
undef $url;
undef $channelid;
if ( ($squidn == 2 ) && (defined $squidin[0]) && ($squidin[0] =~ /^\d+?$/) ) {
$channelid = $squidin[0];
$url = $squidin[1] if (defined $squidin[1]);
} else {
$url = $squidin[0] if (defined $squidin[0]);
}

[...]
logtofile("Channel-ID: ".$channelid."\n") if ((defined $channelid) && ($debug 
>= 1));

[...do DB lookups, reply accordingly...]

if (defined $channelid) {
print( $channelid." OK\n" );
logtofile( $channelid." OK\n" ) if ($debug >= 1);
} else {
print( "OK\n" );
logtofile( "OK\n" ) if ($debug >= 1);
}
[...similar responses for ERR messages...]
}

Here's the relevant squid.conf line:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
children-startup=10 children-idle=3 concurrency=8 %URI 
/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]

How can I check in the Squid log that concurrency is "working"?

If the helper logs to a text file as in the trimmed code above, I notice that 
the channel ID is always 0. I get messages such as:
Channel-ID: 0
0 OK
Channel-ID: 0
0 ERR ...

Is this expected?

Despite this, I can see that the number of helper processes does not increase 
over time for now, and that HTTP/S client browsing is responsive enough.
# ps aux | grep -c squid_url_lookup.pl
11

One last thing. I'm using:
cache_dir diskd /var/cache/squid 100 16 256
I may want to try to comment out this directive for improved I/O performance.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache takes a break

2017-09-11 Thread Vieri


From: Amos Jeffries 
>
> a) start fewer helpers at a time.
> 
> b) reduce cache_mem.
> 
> c) add concurrency support to the helpers.


So I decreased the startup, idle, cache_mem values:

# egrep 'startup=|idle=' squid.conf
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
children-startup=10 children-idle=3 %URI 
/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
sslcrtd_children 128 startup=10 idle=3

# grep cache_mem squid.conf
cache_mem 64 MB

I also set debug_options to "ALL,1 5,9 50,6 51,3 54,9".

As far as concurrency is concerned, I never programmed a helper to support this 
feature.
If it were to be done in Perl, do you know by any chance if it would require 
Perl6 "promises" with await/start function calls?

Currently, my "bllookup" helper is a simple Perl5 script which reads from 
standard input like so:

while (<STDIN>)
{
[...lookup URI in a MySQL database and reply accordingly to Squid...]
}

It does not handle the channel-ID field.

I haven't found many Squid concurrency-enabled helper examples out there.

By the way, I see that Squid defaults to IPv6 for helper communications. I 
suppose it wouldn't make any real difference if I tried "ipv4" with 
"external_acl_type".
If I don't get any new info next time Squid slows down to a crawl, I'll 
probably try ipv4 just for kicks.
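If trying that experiment, the change would be a single extra flag on the directive quoted earlier (a sketch based on the external_acl_type ipv4/ipv6 options; the rest of the line is unchanged):

```
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 \
    children-startup=10 children-idle=3 ipv4 %URI \
    /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
```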

What I still don't get is how long it takes for Squid to get back to work after 
I do a complete restart (after thoroughly killing all related processes, 
including helpers). I'm talking more than 5 minutes here...
If I ever get the same issue again, I understand that I can:

- stop squid and, if necessary, kill all apparently stalled processes

- modify squid.conf, and decrease or comment out all *startup= and *idle= 
options

- start squid

At this point, I should expect Squid to be up and serving within a reasonable 
amount of time, even if I may get squid warnings later on asking me to increase 
those values.
Or maybe not, because the Linux kernel might be busy cleaning up the swap space 
anyway?

One last thing. I'm running squid 3.5.26. I'll try to upgrade to 3.5.27 asap.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache takes a break

2017-09-08 Thread Vieri
Hi,

Sorry for the title, but I really don't know how to describe what just happened 
today. It's really odd.

I previously posted a few similar issues which were all fixed if I increased 
certain parameters (ulimits, children-{max,startup,idle}, TTL, etc.).

This time however, after several days trouble-free I got another show-stopper. 
The local squid cache stopped serving for almost half an hour. After that, it 
all started working again magically. I had the chance to log into the server 
with ssh and try a few things:

- In the cache log I could see these messages:
Starting new bllookup helpers...
helperOpenServers: Starting 10/80 'squid_url_lookup.pl' processes
WARNING: Cannot run 
'/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl' process.
WARNING: Cannot run 
'/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl' process.
WARNING: Cannot run 
'/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl' process.

It doesn't say much as to why it "cannot run" the external program.

This is how the program is defined in squid.conf:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 
children-startup=40 children-idle=10 %URI 
/opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]

Other than that, the log is pretty quiet.

The HTTP clients do not get served at all. They keep waiting for a reply.

# ps aux | grep squid
root  3043  0.0  0.0  84856  1728 ?Ss   Aug31   0:00 
/usr/sbin/squid -YC -f /etc/squid/squid.http.conf -n squidhttp
squid 3046  0.0  0.0 128232 31052 ?SAug31   0:35 (squid-1) -YC 
-f /etc/squid/squid.http.conf -n squidhttp
root  3538  0.0  0.0  86912  1740 ?Ss   Aug31   0:00 
/usr/sbin/squid -YC -f /etc/squid/squid.https.conf -n squidhttps
squid 3540  0.0  0.1 134616 35608 ?SAug31   1:09 (squid-1) -YC 
-f /etc/squid/squid.https.conf -n squidhttps
root  5690  0.0  0.0  87444  1736 ?Ss   Aug31   0:00 
/usr/sbin/squid -YC -f /etc/squid/squid.conf -n squid
squid 5694  2.4  6.5 3769624 2136968 ? SAug31 293:24 (squid-1) -YC 
-f /etc/squid/squid.conf -n squid
squid 5727  0.0  0.0   4008   524 ?SAug31   0:01 (unlinkd)
squid 5728  0.0  0.0  13904  1576 ?SAug31   2:09 diskd 5830660 
5830661 5830662
squid11927  0.0  0.0   4156   644 ?SSep07   0:36 
(logfile-daemon) /var/log/squid/access.log
squid11937  1.7  0.0  41792  6232 ?SSep07  31:08 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11939  0.1  0.0  41776  6288 ?SSep07   3:09 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11940  0.0  0.0  41784  6356 ?SSep07   0:28 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11941  0.0  0.0  41800  6308 ?SSep07   0:07 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11942  0.0  0.0  41800  6308 ?SSep07   0:02 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11943  0.0  0.0  41784  6320 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11944  0.0  0.0  41784  6068 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11945  0.0  0.0  41780  6372 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11946  0.0  0.0  41800  6852 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11947  0.0  0.0  41784  6756 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11948  0.0  0.0  41792  6784 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11949  0.0  0.0  41780  6672 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11950  0.0  0.0  41780  6660 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11951  0.0  0.0  41760  6308 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11952  0.0  0.0  41772  6336 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11953  0.0  0.0  41772  6284 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11954  0.0  0.0  41776  6956 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11955  0.0  0.0  41772  6524 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11956  0.0  0.0  41772  6664 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11958  0.0  0.0  41772  6284 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11959  0.0  0.0  40444  3368 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11960  0.0  0.0  40444  3368 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11968  0.0  0.0  40444  3368 ?SSep07   0:00 (ssl_crtd) -s 
/var/lib/squid/ssl_db -M 16MB
squid11969  0.0  0.0  40444  3368 ?S 

Re: [squid-users] squid cache peer not rotating over round robin !

2017-09-05 Thread Amos Jeffries

On 05/09/17 07:17, --Ahmad-- wrote:

hello folks

I’m trying to rotate Squid requests over several peers.
My config is below, but requests stick to only one peer.

acl custNet dstdomain  .trustly.com  .ing.nl  .adyen.com .rabobank.nl .abn.nl 
.iplocation.net .abnamro.com .abnamro.nl .abnamro.nl
cache_peer 66.78.18.1 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.2 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.3 parent 41311 0 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.4 parent 41311 0 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.5 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.6 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.7 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.8 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.9 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
#
never_direct allow custNet
http_access allow custNet


Any idea what's wrong with the round-robin above?



The lines for peer *.3 and *.4 contain an extra '0' in the options area 
after the ICP port, which may be preventing Squid from loading this config.
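For reference, with the stray '0' removed those two lines would match their siblings:

```
cache_peer 66.78.18.3 parent 41311 0 no-query round-robin no-digest no-tproxy proxy-only
cache_peer 66.78.18.4 parent 41311 0 no-query round-robin no-digest no-tproxy proxy-only
```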


Other than that nothing is visible from the details you mention.


Some random possibilities:

* Check that all peers are detected as ALIVE by Squid. Peers that are 
marked as DEAD (10 consecutive failures occurred) are skipped by the load 
balancing algorithms. The 'no-query' option disables the UDP checks used to 
detect peers going from DEAD back to ALIVE status, so error recovery will be 
quite slow.



* Check that the "requests" you are referring to are not all inside a 
single CONNECT tunnel. Tunnels count as a single request in plain-text 
HTTP no matter how much encrypted traffic they contain.



I assume this is the same 3.5.22 you mentioned using in threads from a 
few days ago.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache peer not rotating over round robin !

2017-09-04 Thread --Ahmad--
hello folks 

I’m trying to rotate Squid requests over several peers.
My config is below, but requests stick to only one peer.

acl custNet dstdomain  .trustly.com  .ing.nl  .adyen.com .rabobank.nl .abn.nl 
.iplocation.net .abnamro.com .abnamro.nl .abnamro.nl
cache_peer 66.78.18.1 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.2 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.3 parent 41311 0 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.4 parent 41311 0 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.5 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.6 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.7 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.8 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
cache_peer 66.78.18.9 parent 41311 0 no-query round-robin no-digest no-tproxy 
proxy-only
#
never_direct allow custNet
http_access allow custNet


Any idea what's wrong with the round-robin above?


cheers

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache peer based on source ip address rule

2017-08-14 Thread --Ahmad--
Dear squid users folks ,

I have many different cache peers I’m going to send traffic to.

The issue is that I want a different cache-peer setup based on the client's 
source IP address.


Say the IP address of the user is 1.1.1.1; I want it to go to the cache peer 
with the setup below:
##
acl custNet dstdomain .mail.com  .mail.ru .trustly.com  .ing.nl .live.adyen.com 
 www.mail.com .uicdn.com . .9gag.com .tumblr.com .boredpanda.com 
.deref-mail.com .tutanota.com
cache_peer 12.13.250.251 parent  0 no-query no-digest
never_direct allow custNet
cache_peer_access 12.13.250.251 allow custNet
cache_peer_access 12.13.250.251 deny all
http_access allow custNet
#



Say the source IP address is 2.2.2.2; I want it to use the custom cache-peer 
config below:


##
acl custNet dstdomain .mail.com  .mail.ru .trustly.com  .ing.nl .live.adyen.com 
 www.mail.com .uicdn.com . .9gag.com .tumblr.com .boredpanda.com 
.deref-mail.com .tutanota.com
cache_peer 12.13.250.252 parent  0 no-query no-digest
never_direct allow custNet
cache_peer_access 12.13.250.252 allow custNet
cache_peer_access 12.13.250.252 deny all
http_access allow custNet
#


So I'm wondering how I can modify the Squid config so that it forwards to a 
specific peer based on the source IP address of the user who is using the 
proxy.
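One common approach is worth sketching here (untested; the port 3128 and ACL names below are placeholders): all ACLs on a single cache_peer_access allow line must match together, so pairing a src ACL with the existing dstdomain ACL selects a peer per client:

```
acl custNet dstdomain .mail.com .mail.ru .trustly.com
acl user1 src 1.1.1.1
acl user2 src 2.2.2.2

cache_peer 12.13.250.251 parent 3128 0 no-query no-digest
cache_peer 12.13.250.252 parent 3128 0 no-query no-digest

# Both ACLs on one allow line must match: destination domain AND source IP.
cache_peer_access 12.13.250.251 allow custNet user1
cache_peer_access 12.13.250.251 deny all
cache_peer_access 12.13.250.252 allow custNet user2
cache_peer_access 12.13.250.252 deny all

never_direct allow custNet
http_access allow custNet
```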



kind regards 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache to Users at Full Bandwidth

2017-05-06 Thread joseph
You have to mark the packets first, in the mangle table, with DSCP (TOS) = 12 
- did you? Then in Squid add: qos_flows tos local-hit=0x30 miss=0xFF

In your queues, pick out the marked packets by name, so the cached HITs will 
be served to your clients at full speed.
If you cannot do it, I can help you out.
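For context, TOS 0x30 is DSCP 12 shifted into the upper six bits of the TOS byte (12 << 2 = 48 = 0x30), so the two values above are consistent. A hypothetical RouterOS mangle rule matching the marked hit traffic might look like this (syntax and chain depend on your RouterOS version and topology; the mark name is a placeholder):

```
/ip firewall mangle add chain=forward dscp=12 \
    action=mark-packet new-packet-mark=squid-hit passthrough=no
```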



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Cache-to-Users-at-Full-Bandwidth-tp4682314p4682320.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache to Users at Full Bandwidth

2017-05-06 Thread christian brendan
On Sat, May 6, 2017 at 1:00 PM, <squid-users-requ...@lists.squid-cache.org>
wrote:

> Today's Topics:
>
>1. Squid Cache to Users at Full Bandwidth (christian brendan)
>2. Re: Squid Cache to Users at Full Bandwidth (Yuri Voinov)
>3. Re: Squid Cache to Users at Full Bandwidth (Antony Stone)
>
>
> --
>
> Message: 1
> Date: Fri, 5 May 2017 16:18:33 +0100
> From: christian brendan <bosscb.chrisb...@gmail.com>
> To: squid-users@lists.squid-cache.org,
> squid-users-ow...@lists.squid-cache.org
> Subject: [squid-users] Squid Cache to Users at Full Bandwidth
> Message-ID:
> 

Re: [squid-users] Squid Cache to Users at Full Bandwidth

2017-05-05 Thread Antony Stone
On Friday 05 May 2017 at 16:18:33, christian brendan wrote:

> Squid Version 3.5.20
> CentOS 7
> Mikrotik RouterBoard v 6.39.1
> Users IP: 192.168.1.0/24
> Squid ip: 192.168.2.1
> 
> Traffic to squid is routed
> 
> i would like users to have full LAN bandwidth access to squid server, i
> have tried simple queue on mikrotik but it seems not to be working.

How does it "not seem to be working"?

A "queue" on the Mikrotik would normally be used to restrict bandwidth, not 
increase it.  Please give details of how you have implemented this queue.

A few further questions:

1. Is Squid running on the Routerboard, or on a rather more capable machine?

2. If the client machines access the Internet directly (ie: not via Squid), do 
they also go via the Routerboard?

3. How are you measuring bandwidth for the clients?


Antony.

-- 
I thought I had type A blood, but it turned out to be a typo.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache to Users at Full Bandwidth

2017-05-05 Thread Yuri Voinov
http://wiki.squid-cache.org/


On 05.05.2017 21:18, christian brendan wrote:
> Squid Version 3.5.20
> CentOS 7
> Mikrotik RouterBoard v 6.39.1
> Users IP: 192.168.1.0/24 
> Squid ip: 192.168.2.1
>
> Traffic to squid is routed
>
> i would like users to have full LAN bandwidth access to squid server,
> i have tried simple queue on mikrotik but it seems not to be working.
>
> Any guide will be appreciated.
>
> Best Regards
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Cache to Users at Full Bandwidth

2017-05-05 Thread christian brendan
Squid Version 3.5.20
CentOS 7
Mikrotik RouterBoard v 6.39.1
Users IP: 192.168.1.0/24
Squid ip: 192.168.2.1

Traffic to squid is routed

i would like users to have full LAN bandwidth access to squid server, i
have tried simple queue on mikrotik but it seems not to be working.

Any guide will be appreciated.

Best Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache analysis

2017-04-06 Thread Antony Stone
On Thursday 06 April 2017 at 12:27:54, Punyasloka Arya wrote:

> squid version:3.3
> OS:centos

Which version of CentOS?

How was Squid installed?

Precisely which version of 3.3 are you using?

> The squid cache is not functioning properly

You'll have to be more specific than that - what *is* working, what is *not* 
working, what is the problem?

> Please suggest something to analyze or capture the log
> so that the cache service will improve.

Well, start with what you see in access.log

http://wiki.squid-cache.org/SquidFaq/SquidLogs

> Do we need to put some tool which will do it automatic?

Please tell us what "it" is - once we know what you want to do, we might be 
able to suggest ways of achieving it.


Antony.

-- 
Atheism is a non-prophet-making organisation.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache analysis

2017-04-06 Thread Punyasloka Arya

squid version:3.3
OS:centos

###
The squid cache is not functioning properly
###
Please suggest something to analyze or capture the log 
so that the cache service will improve.
Do we need to put some tool which will do it automatic?

from



Punyasloka Arya
Staffno:3880
Senior Research Engineer
Netops,TS(B)
C-DOT(B)
PHASE-1,ELECTRONICS CITY
BANGALORE-560100
INDIA
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache: Version 3.5.16 and ext_ldap_group_acl

2016-04-12 Thread Thomas Elsäßer

On 12-04-2016 10:58, Amos Jeffries wrote:

On 12/04/2016 8:36 p.m., Thomas Elsäßer wrote:

Dear all,

I call from Shell:

/usr/local/squid/libexec/ext_ldap_group_acl -d -R -b
"OU=UMW,DC=a,DC=b,DC=de" -D "...@a.b.de" -w "XXX" \
 -f
"(&(objectClass=person)(sAMAccountName=%v)(MemberOf=CN=%g,OU=DomLokaleGruppen,OU=Gruppen,OU=Benutzer,OU=Min-PRD,OU=XXX,DC=a,DC=b,DC=de))"
-h dc.a.b.de





When I trace the helper process, I can see that Squid replaces the %v 
with usern...@a.b.de, so the helper returns ERR to Squid.

Where can I configure this so that the passed variable is only the username?


That is the user name/label as provided to Squid by the auth helper. It
depends on whether the particular auth helper(s) you are using allow the
credentials domain to be cropped away.

Since it is using the "@" symbol, look at the Negotiate auth helper options.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Yes - sorry for the stupid questions - the -r option is what I need. Thanks 
again!!!
auth_param negotiate program 
/usr/local/squid/libexec/negotiate_kerberos_auth -d -r -s  HTTP/...


Best wishes
Thomas
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache

2016-02-09 Thread turgut kalfaoğlu
Hi again.. I have a squid setup with two servers; one acting as "parent"
and only getting requests from the child,
and the other one actually serves people as a transparent accelerator
for the slow internet.

It works well normally, two things I could not get to work well:
1) SSL. I had many problems and gave up eventually. I haven't tried it
lately, now it's at 3.5.9, should I try it again, and is there a working
formula that works well?

2) www.rolex.com. For some reason, this site gives an access denied!  No
big deal, but just interesting.

Regards,
Turgut

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2016-02-09 Thread Amos Jeffries
On 10/02/2016 6:16 a.m., turgut kalfaoğlu wrote:
> Hi again.. I have a squid setup with two servers; one acting as "parent"
> and only getting requests from the child,
> and the other one actually serves people as a transparent accelerator
> for the slow internet.

What do you mean exactly? "transparent accelerator" is not an HTTP
related term, and the two traffic modes that are commonly called
"transparent" and "accel" are mutually exclusive things.

> 
> It works well normally, two things I could not get to work well:
> 1) SSL. I had many problems and gave up eventually. I haven't tried it
> lately, now it's at 3.5.9, should I try it again, and is there a working
> formula that works well?

The only setup that works well is not to intercept. All others vary in
amounts of trouble and success.

> 
> 2) www.rolex.com. For some reason, this site gives an access denied!  No
> big deal, but just interesting.
> 

If you want a useful answer your squid.conf will be needed. Please omit
comment (#...) lines when posting.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache peer issues

2015-12-21 Thread Alex Samad
Hi

Seems like .12 is now available for me. I will apply it and retest. Is there
anything you would like me to do if I see it again?

A

On 21 December 2015 at 21:26, Amos Jeffries  wrote:
> On 21/12/2015 2:00 p.m., Alex Samad wrote:
>> Hi
>>
>> running on centos 6.7
>>
>> 3.5.12 still not available on centos 6.
>>
>> rpm -qa | grep squid
>> squid-helpers-3.5.11-1.el6.x86_64
>> squid-3.5.11-1.el6.x86_64
>>
>> This is the 2 cache_peer statements I use
>>
>> # on alcdmz1
>> cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>> #cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>>
>> # on gsdmz1
>> #cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>> cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
>> no-query standby=10
>>
>> on alcdmz1 with export http_proxy pointing to alcdmz1
>>
>> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> -O /dev/null
>> Setting --output-document (outputdocument) to /dev/null
>> DEBUG output created by Wget 1.12 on linux-gnu.
>>
>> --2015-12-21 11:58:05--
>> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> Resolving alcdmz1... 10.32.20.111
>> Caching alcdmz1 => 10.32.20.111
>> Connecting to alcdmz1|10.32.20.111|:3128... connected.
>> Created socket 4.
>> Releasing 0x0101d540 (new refcount 1).
>>
>> ---request begin---
>> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
>> User-Agent: Wget/1.12 (linux-gnu)
>> Accept: */*
>> Host: fonts.gstatic.com
>>
>> ---request end---
>> Proxy request sent, awaiting response...
>> ---response begin---
>> HTTP/1.1 200 OK
>> Content-Type: font/woff2
>> Access-Control-Allow-Origin: *
>> Timing-Allow-Origin: *
>> Date: Mon, 30 Nov 2015 04:06:16 GMT
>> Expires: Tue, 29 Nov 2016 04:06:16 GMT
>> Last-Modified: Mon, 06 Oct 2014 20:40:59 GMT
>> X-Content-Type-Options: nosniff
>> Server: sffe
>> Content-Length: 25604
>> X-XSS-Protection: 1; mode=block
>> Cache-Control: public, max-age=31536000
>> Age: 1803109
>> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
>> than 1 day old
>> X-Cache: HIT from alcdmz1
>> X-Cache-Lookup: HIT from alcdmz1:3128
>> Via: 1.1 alcdmz1 (squid)
>> Connection: close
>>
>> ---response end---
>> 200 OK
>> Length: 25604 (25K) [font/woff2]
>> Saving to: `/dev/null'
>>
>> 100%[==>]
>> 25,604  --.-K/s   in 0s
>>
>> Closed fd 4
>> 2015-12-21 11:58:05 (1.01 GB/s) - `/dev/null' saved [25604/25604]
>>
>>
>> on gsdmz1
>>
>>
>> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> -O /dev/null
>> Setting --output-document (outputdocument) to /dev/null
>> DEBUG output created by Wget 1.12 on linux-gnu.
>>
>> --2015-12-21 11:58:59--
>> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
>> Resolving gsdmz1... 10.32.20.110
>> Caching gsdmz1 => 10.32.20.110
>> Connecting to gsdmz1|10.32.20.110|:3128... connected.
>> Created socket 4.
>> Releasing 0x010a2930 (new refcount 1).
>>
>> ---request begin---
>> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
>> User-Agent: Wget/1.12 (linux-gnu)
>> Accept: */*
>> Host: fonts.gstatic.com
>>
>> ---request end---
>> Proxy request sent, awaiting response...
>> ---response begin---
>> HTTP/1.1 504 Gateway Timeout
>> Server: squid
>> Mime-Version: 1.0
>> Date: Mon, 21 Dec 2015 00:58:59 GMT
>> Content-Type: text/html;charset=utf-8
>> Content-Length: 3964
>> X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
>> Vary: Accept-Language
>> Content-Language: en
>> Age: 1450659540
>> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
>> than 1 day old
>> Warning: 110 squid "Response is stale"
>> Warning: 111 squid "Revalidation failed"
>> X-Cache: HIT from alcdmz1
>> X-Cache-Lookup: HIT from alcdmz1:3128
>> X-Cache: MISS from gsdmz1
>> X-Cache-Lookup: MISS from gsdmz1:3128
>> Via: 1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)
>> Connection: close
>>
>> ---response end---
>> 504 Gateway Timeout
>> Closed fd 4
>> 2015-12-21 11:58:59 ERROR 504: Gateway Timeout.
>>
>>
>> so why does it work from alc and not from gs ???
>
> The alc fetch is going:
>   client->alc->Internet/parent
>
> The gs fetch is going:
>   client->gs->alc->Internet/parent
>
> This is shown in the Via headers.
>
>
> The alc sibling has a response cached which matches. But that required a
> revalidation. (The 113 and 110 Warning headers)
>
> The revalidation failed for some reason (the only-if-cached ?). So it
> output a 504 and sent that back to gs. (The 111 Warning header)
>
> There are several problems here:
> 1) why the revalidation is failing, and
> 2) why the gs peer is not re-trying the fetch via another server (parent
> or DIRECT) after the 504 happens.
> 3) The Age header says ~46yrs ago for the 504 being 

Re: [squid-users] squid cache peer issues

2015-12-21 Thread Amos Jeffries
On 21/12/2015 2:00 p.m., Alex Samad wrote:
> Hi
> 
> running on centos 6.7
> 
> 3.5.12 still not available on centos 6.
> 
> rpm -qa | grep squid
> squid-helpers-3.5.11-1.el6.x86_64
> squid-3.5.11-1.el6.x86_64
> 
> This is the 2 cache_peer statements I use
> 
> # on alcdmz1
> cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
> no-query standby=10
> #cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
> no-query standby=10
> 
> # on gsdmz1
> #cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
> no-query standby=10
> cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp
> no-query standby=10
> 
> on alcdmz1 with export http_proxy pointing to alcdmz1
> 
> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
> -O /dev/null
> Setting --output-document (outputdocument) to /dev/null
> DEBUG output created by Wget 1.12 on linux-gnu.
> 
> --2015-12-21 11:58:05--
> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
> Resolving alcdmz1... 10.32.20.111
> Caching alcdmz1 => 10.32.20.111
> Connecting to alcdmz1|10.32.20.111|:3128... connected.
> Created socket 4.
> Releasing 0x0101d540 (new refcount 1).
> 
> ---request begin---
> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
> User-Agent: Wget/1.12 (linux-gnu)
> Accept: */*
> Host: fonts.gstatic.com
> 
> ---request end---
> Proxy request sent, awaiting response...
> ---response begin---
> HTTP/1.1 200 OK
> Content-Type: font/woff2
> Access-Control-Allow-Origin: *
> Timing-Allow-Origin: *
> Date: Mon, 30 Nov 2015 04:06:16 GMT
> Expires: Tue, 29 Nov 2016 04:06:16 GMT
> Last-Modified: Mon, 06 Oct 2014 20:40:59 GMT
> X-Content-Type-Options: nosniff
> Server: sffe
> Content-Length: 25604
> X-XSS-Protection: 1; mode=block
> Cache-Control: public, max-age=31536000
> Age: 1803109
> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
> than 1 day old
> X-Cache: HIT from alcdmz1
> X-Cache-Lookup: HIT from alcdmz1:3128
> Via: 1.1 alcdmz1 (squid)
> Connection: close
> 
> ---response end---
> 200 OK
> Length: 25604 (25K) [font/woff2]
> Saving to: `/dev/null'
> 
> 100%[==>]
> 25,604  --.-K/s   in 0s
> 
> Closed fd 4
> 2015-12-21 11:58:05 (1.01 GB/s) - `/dev/null' saved [25604/25604]
> 
> 
> on gsdmz1
> 
> 
> wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
> -O /dev/null
> Setting --output-document (outputdocument) to /dev/null
> DEBUG output created by Wget 1.12 on linux-gnu.
> 
> --2015-12-21 11:58:59--
> http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
> Resolving gsdmz1... 10.32.20.110
> Caching gsdmz1 => 10.32.20.110
> Connecting to gsdmz1|10.32.20.110|:3128... connected.
> Created socket 4.
> Releasing 0x010a2930 (new refcount 1).
> 
> ---request begin---
> GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
> User-Agent: Wget/1.12 (linux-gnu)
> Accept: */*
> Host: fonts.gstatic.com
> 
> ---request end---
> Proxy request sent, awaiting response...
> ---response begin---
> HTTP/1.1 504 Gateway Timeout
> Server: squid
> Mime-Version: 1.0
> Date: Mon, 21 Dec 2015 00:58:59 GMT
> Content-Type: text/html;charset=utf-8
> Content-Length: 3964
> X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
> Vary: Accept-Language
> Content-Language: en
> Age: 1450659540
> Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
> than 1 day old
> Warning: 110 squid "Response is stale"
> Warning: 111 squid "Revalidation failed"
> X-Cache: HIT from alcdmz1
> X-Cache-Lookup: HIT from alcdmz1:3128
> X-Cache: MISS from gsdmz1
> X-Cache-Lookup: MISS from gsdmz1:3128
> Via: 1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)
> Connection: close
> 
> ---response end---
> 504 Gateway Timeout
> Closed fd 4
> 2015-12-21 11:58:59 ERROR 504: Gateway Timeout.
> 
> 
> so why does it work from alc and not from gs ???

The alc fetch is going:
  client->alc->Internet/parent

The gs fetch is going:
  client->gs->alc->Internet/parent

This is shown in the Via headers.
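As a quick editorial illustration (a sketch, not part of the original thread): the forwarding chain can be read straight off the Via header in the gs response, since each proxy appends itself as the response passes through.

```python
# Recover the proxy chain from the Via header of the 504 response above.
# Each hop is "protocol-version host (comment)", comma-separated.
via = "1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)"

hops = [entry.strip().split()[1] for entry in via.split(",")]
print(hops)  # ['alcdmz1', 'gsdmz1']: alcdmz1 handled the response first, then gsdmz1
```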


The alc sibling has a response cached which matches. But that required a
revalidation. (The 113 and 110 Warning headers)

The revalidation failed for some reason (the only-if-cached ?). So it
output a 504 and sent that back to gs. (The 111 Warning header)

There are several problems here:
1) why the revalidation is failing, and
2) why the gs peer is not re-trying the fetch via another server (parent
or DIRECT) after the 504 happens.
3) the Age header dates the creation of the 504 to ~46 years ago,
suspiciously close to the Unix epoch (1 Jan 1970, second 0).
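To check point (3) numerically (an editorial sketch, not from the thread): subtracting the Age header from the response's Date header gives the apparent time the 504 was generated, and here it lands essentially at epoch zero.

```python
from datetime import datetime, timezone

# Date and Age headers from the 504 response above
date_hdr = datetime(2015, 12, 21, 0, 58, 59, tzinfo=timezone.utc)
age = 1450659540  # seconds, from the Age header

# Apparent creation time of the cached 504 = Date - Age
created = date_hdr.timestamp() - age
print(created)  # -1.0: one second *before* the Unix epoch
```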


It seems to me you have managed to reproduce


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache peer issues

2015-12-20 Thread Alex Samad
Hi

running on centos 6.7

3.5.12 still not available on centos 6.

rpm -qa | grep squid
squid-helpers-3.5.11-1.el6.x86_64
squid-3.5.11-1.el6.x86_64

These are the two cache_peer statements I use:

# on alcdmz1
cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp no-query standby=10
#cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp no-query standby=10

# on gsdmz1
#cache_peer gsdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp no-query standby=10
cache_peer alcdmz1.yieldbroker.com sibling 3128 4827 proxy-only htcp no-query standby=10

on alcdmz1 with export http_proxy pointing to alcdmz1

wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
-O /dev/null
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2015-12-21 11:58:05--
http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
Resolving alcdmz1... 10.32.20.111
Caching alcdmz1 => 10.32.20.111
Connecting to alcdmz1|10.32.20.111|:3128... connected.
Created socket 4.
Releasing 0x0101d540 (new refcount 1).

---request begin---
GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: fonts.gstatic.com

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK
Content-Type: font/woff2
Access-Control-Allow-Origin: *
Timing-Allow-Origin: *
Date: Mon, 30 Nov 2015 04:06:16 GMT
Expires: Tue, 29 Nov 2016 04:06:16 GMT
Last-Modified: Mon, 06 Oct 2014 20:40:59 GMT
X-Content-Type-Options: nosniff
Server: sffe
Content-Length: 25604
X-XSS-Protection: 1; mode=block
Cache-Control: public, max-age=31536000
Age: 1803109
Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
than 1 day old
X-Cache: HIT from alcdmz1
X-Cache-Lookup: HIT from alcdmz1:3128
Via: 1.1 alcdmz1 (squid)
Connection: close

---response end---
200 OK
Length: 25604 (25K) [font/woff2]
Saving to: `/dev/null'

100%[==>]
25,604  --.-K/s   in 0s

Closed fd 4
2015-12-21 11:58:05 (1.01 GB/s) - `/dev/null' saved [25604/25604]


on gsdmz1


wget -d  http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
-O /dev/null
Setting --output-document (outputdocument) to /dev/null
DEBUG output created by Wget 1.12 on linux-gnu.

--2015-12-21 11:58:59--
http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2
Resolving gsdmz1... 10.32.20.110
Caching gsdmz1 => 10.32.20.110
Connecting to gsdmz1|10.32.20.110|:3128... connected.
Created socket 4.
Releasing 0x010a2930 (new refcount 1).

---request begin---
GET http://fonts.gstatic.com/s/lato/v11/H2DMvhDLycM56KNuAtbJYA.woff2 HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: fonts.gstatic.com

---request end---
Proxy request sent, awaiting response...
---response begin---
HTTP/1.1 504 Gateway Timeout
Server: squid
Mime-Version: 1.0
Date: Mon, 21 Dec 2015 00:58:59 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3964
X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
Vary: Accept-Language
Content-Language: en
Age: 1450659540
Warning: 113 alcdmz1 (squid) This cache hit is still fresh and more
than 1 day old
Warning: 110 squid "Response is stale"
Warning: 111 squid "Revalidation failed"
X-Cache: HIT from alcdmz1
X-Cache-Lookup: HIT from alcdmz1:3128
X-Cache: MISS from gsdmz1
X-Cache-Lookup: MISS from gsdmz1:3128
Via: 1.1 alcdmz1 (squid), 1.1 gsdmz1 (squid)
Connection: close

---response end---
504 Gateway Timeout
Closed fd 4
2015-12-21 11:58:59 ERROR 504: Gateway Timeout.


so why does it work from alc and not from gs ???

A
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2015-09-30 Thread Antony Stone
On Wednesday 30 September 2015 at 21:35:32, Magic Link wrote:

> Hi,i configure squid to use cache. It seems to work because when i did a
> try with a software's download, the second download is TCP_HIT in the
> access.log.

Congratulations.

> The question i have is : why the majority of requests can't be cached (i
> have a lot of tcp_miss/200) ?

Whether that is the *majority* of requests depends greatly upon what sort of 
content you are requesting.

That may sound trite, but I can't think of a better way of expressing it.  Get 
10 users on your network to download the same image file (no, not all at the 
same time), and you'll see that the 9 who were not first get the content a lot 
faster than the 1 who was first.

If they're downloading other types of content, though, you may not get such a 
"good" result.

> i found that dynamic content is not cached but i don't understand.

What does "dynamic" mean?  It means it is not fixed / constant / stable.  In 
other words, requesting the "same content" twice might produce different 
answers; therefore, if Squid gave you the first answer again in response to the 
second request, that would not match what you would have got from the remote 
server, and would therefore be wrong.

Example: eBay

You look up an auction which is due to end in 2 minutes.  You see the current 
price and the number of bids (plus the details of what it is, etc).

5 minutes later you request the same URL again.  It would be wrong of Squid to 
show you the same page, with 2 minutes to go, and the bidding at X currency 
units, from Y other bidders.  No, Squid should realise that the content it 
previously requested is now stale, and it needs to fetch the new current 
content and show you who won the auction and for how much.

That is dynamic content.  The remote server tells Squid that there is no point 
in caching the page it just fetched, because within 1 second it may well be 
stale and need fetching anew.

A lot of the Internet works that way these days.
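The "remote server tells Squid" part happens through response headers such as Cache-Control. A much-simplified editorial sketch of that decision (a toy model; real caches apply many more rules from RFC 7234):

```python
def is_cacheable(cache_control: str) -> bool:
    """Very rough check: no-store/no-cache/private forbid reuse;
    a positive max-age (or s-maxage) permits it."""
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if any(d in ("no-store", "no-cache", "private") for d in directives):
        return False
    for d in directives:
        if d.startswith(("max-age=", "s-maxage=")):
            return int(d.split("=", 1)[1]) > 0
    return False  # no explicit freshness info in this toy model

print(is_cacheable("public, max-age=31536000"))  # the font response above -> True
print(is_cacheable("no-store"))                  # a typical auction page -> False
```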


Regards,


Antony.


-- 
"Life is just a lot better if you feel you're having 10 [small] wins a day 
rather than a [big] win every 10 years or so."

 - Chris Hadfield, former skiing (and ski racing) instructor

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid cache

2015-09-30 Thread Magic Link
Hi,

I configured Squid to use the cache. It seems to work, because when I tried a
software download, the second download was a TCP_HIT in access.log.

The question I have is: why can't the majority of requests be cached (I have a
lot of TCP_MISS/200)? I found that dynamic content is not cached, but I don't
understand it very well. So, finally, what does this configuration do?

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

Do I have to increase the "refresh_pattern -i (/cgi-bin/|\?) 0 0% 0" values
for it to take effect? Thank you very much.




  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2015-09-30 Thread Yuri Voinov

Don't do it. Ever.

You would break most sites for your clients.

Dynamic content is not limited to cgi-bin URLs. Caching dynamic content is
not a simple or trivial task.

On 01.10.15 1:35, Magic Link wrote:
> Hi, i configure squid to use cache. It seems to work because when i did a
> try with a software's download, the second download is TCP_HIT in the
> access.log. The question i have is : why the majority of requests can't be
> cached (i have a lot of tcp_miss/200) ? i found that dynamic content is not
> cached but i don't understand very well. So finally what does this
> configuration do?
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> Do i have to increase "refresh_pattern -i (/cgi-bin/|\?) 0 0% 0" to take
> effects? Thank you very much.
>
>
>
>
>   
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid cache

2015-09-30 Thread Leonardo Rodrigues

On 30/09/15 16:35, Magic Link wrote:

Hi,

i configure squid to use cache. It seems to work because when i did a 
try with a software's download, the second download is TCP_HIT in the 
access.log.
The question i have is : why the majority of requests can't be cached 
(i have a lot of tcp_miss/200) ? i found that dynamic content is not 
cached but i don't understand.very well.




    That's the way the internet works ... most of the traffic is 
dynamically generated, which in default squid configurations prevents the 
content from being cached. Nowadays, with 'everything https' taking 
place, HTTPS is also non-cacheable (in default configurations).


    And by "default configurations", you must understand that they are 
the SECURE configurations. Tweaking refresh_pattern is usually not 
recommended, except in some specific cases in which you are completely 
clear that you are violating the HTTP protocol and can have problems with 
that.
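For readers wondering what those refresh_pattern numbers actually control: the fields are min, percent, and max, and they drive Squid's heuristic freshness check for responses without explicit expiry information. A heavily simplified editorial sketch of that logic (real Squid applies many more rules and overrides):

```python
def is_fresh(age_min: float, lm_age_min: float,
             min_min: float = 0, percent: float = 0.20,
             max_min: float = 4320) -> bool:
    """Simplified LM-factor check for `refresh_pattern . 0 20% 4320`:
    a response without explicit expiry is fresh if its age is under
    `min`, or under percent * (time since Last-Modified), capped at
    `max` (all values in minutes)."""
    if age_min <= min_min:
        return True
    heuristic = min(percent * lm_age_min, max_min)
    return age_min <= heuristic

# Object last modified 10 days (14400 min) before it was cached:
print(is_fresh(age_min=60, lm_age_min=14400))    # 60 <= 2880 -> True
print(is_fresh(age_min=4000, lm_age_min=14400))  # 4000 > 2880 -> False
```

This is why raising the cgi-bin pattern's zeros would force heuristic caching of query URLs, which is exactly what the earlier replies warn against.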


    In short, the days of 20-30% byte-hit ratios are gone and will never 
come back.


    Keep your default (and secure) squid configuration; there's no need 
to tweak refresh_pattern except in very specific situations where you 
clearly understand what you're doing.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

