Re: [squid-users] Squid memory consumption problem

2020-06-17 Thread Eliezer Croitoru
Since you are using ssl-bump, you would need to run it manually from the CLI as the squid user and see what happens. You will need to reinitialize the certificate directory and test it again. Take a peek at:
https://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit#Create_and_initialize_TLS_certificates_cache_directory

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

From: DIXIT Ankit
Sent: Tuesday, June 16, 2020 11:23 AM
To: Eliezer Croitoru; Alex Rousskov; squid-users@lists.squid-cache.org
Cc: UPADHYAY Neeraj; SETHI Konica; DWIVEDI Gaurav Kumar
Subject: RE: [squid-users] Squid memory consumption problem

Croitoru,

Dependencies are resolved and squid installed successfully, but during squid process start we are getting the below error. Screenshot also attached. Please suggest.

FATAL: The /usr/lib64/squid/security_file_certgen -s /var/spool/squid/ssl_db -M 4MB helpers are crashi...eed help!

Regards,
Ankit Dixit | IS Cloud Team
Eurostar International Ltd
Times House | Bravingtons Walk | London N1 9AW
Office: +44 (0)207 84 35550 (Extension 35530)

From: Eliezer Croitoru
Sent: Thursday, June 11, 2020 7:18 PM
To: DIXIT Ankit; Alex Rousskov; squid-users@lists.squid-cache.org
Cc: UPADHYAY Neeraj; SETHI Konica; DWIVEDI Gaurav Kumar
Subject: RE: [squid-users] Squid memory consumption problem

Hey,

What you should do is:
yum localinstall squid-helpers-4.12-1.amzn2.x86_64.rpm perl-Crypt-OpenSSL-X509-0.1-1.amzn2.noarch.rpm
and it should resolve the dependencies automatically.

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

From: DIXIT Ankit
Sent: Thursday, June 11, 2020 7:49 PM
To: Eliezer Croitoru; Alex Rousskov; squid-users@lists.squid-cache.org
Cc: UPADHYAY Neeraj; SETHI Konica; DWIVEDI Gaurav Kumar
Subject: RE: [squid-users] Squid memory consumption problem

Croitoru,

When I am installing these packages, it's not able to resolve dependencies. Below is the error.
[root@ squid_package]# rpm -ivh squid-helpers-4.12-1.amzn2.x86_64.rpm
error: Failed dependencies:
    perl(DBI) is needed by squid-helpers-7:4.12-1.amzn2.x86_64
    perl(Data::Dumper) is needed by squid-helpers-7:4.12-1.amzn2.x86_64
    perl(Digest::MD5) is needed by squid-helpers-7:4.12-1.amzn2.x86_64
    perl(Digest::SHA) is needed by squid-helpers-7:4.12-1.amzn2.x86_64
    perl(URI::URL) is needed by squid-helpers-7:4.12-1.amzn2.x86_64

Before the above error, I had to resolve one more dependency by installing the below lib package:
yum install libtool-ltdl-2.4.2-22.2.amzn2.0.2.x86_64

Please suggest.

Regards,
Ankit Dixit | IS Cloud Team
Eurostar International Ltd
Times House | Bravingtons Walk | London N1 9AW
Office: +44 (0)207 84 35550 (Extension 35530)

From: Eliezer Croitoru
Sent: Wednesday, June 10, 2020 1:14 PM
To: DIXIT Ankit; Alex Rousskov; squid-users@lists.squid-cache.org
Cc: UPADHYAY Neeraj; SETHI Konica; DWIVEDI Gaurav Kumar
Subject: RE: [squid-users] Squid memory consumption problem

Hey,

Squid 4 is tested on Amazon Linux 2. I tested it in the last year, and I believe there is no reason to run a full set of tests now. Squid 4 only needs a basic "dry-run" to see what you need to change in your squid.conf. I suggest first running it without any cache_dir for maybe an hour.

Do not go back to CentOS 7. To my knowledge Amazon Linux 2 receives better overall support than CentOS 7. From my tests in the past, Amazon Linux 2 is faster, and it is an LTS distribution, so you will have someone to contact in any case of trouble, compared to CentOS, where you are at the mercy of your own efforts and "Google-fu". I believe you would not want to find yourself in a situation where you have to try to contact upstream RH for help.

Feel free to send me or the mailing list any email you want. I will try to at least read them if I cannot respond on the spot.
Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

From: DIXIT Ankit
Sent: Wednesday, June 10, 2020 2:56 PM
To: Eliezer Croitoru; Alex Rousskov; squid-users@lists.squid-cache.org
Cc: UPADHYAY Neeraj; SETHI Konica; DWIVEDI Gaurav Kumar
Subject: RE: [squid-users] Squid memory consumption problem

Hi,

Thanks for providing the rpm information, but I had some questions as per my last email:
Does this mean that Squid 4 has not been tested on Amazon Linux 2 yet? How much time will the testing take? If it will take time, then I am thinking of changing the operating system from Amazon Linux 2 to CentOS 7 and then installing Squid 4.

Regards,
Ankit Dixit | IS Cloud Team
Eurostar International Ltd
Times House | Bravingtons Walk | London N1 9AW
Office: +44 (0)207 84 35550 (Extension 35530)

From: Eliezer Croitoru
Sent: Wednesday, June 10, 2020 12:26

Re: [squid-users] Squid and c-icap's srv_url_check module

2020-06-17 Thread Amos Jeffries
On 18/06/20 1:32 am, Amiq Nahas wrote:
> On Wed, Jun 17, 2020 at 10:23 AM Amos Jeffries wrote:
>>
>> On 16/06/20 1:55 am, Amiq Nahas wrote:
>>> Hi Guys,
>>>
>>> I am trying to use the srv_url_check module to block websites.
>>> I have configured squid with proxy authentication and followed this
>>> wiki: https://sourceforge.net/p/c-icap/wiki/UrlCheckProfiles/
>>> to configure c-icap and srv_url_check. Now, I am having trouble
>>> configuring squid.conf. Below I have shared my configuration of squid.
>>>
>>
>>
>> "I am having trouble" is not sufficient details to investigate a problem.
> 
> Sorry, my bad.
> After doing all the above configuration, the browser does not block
> the websites in the blocklist.

If the browser were doing the blocking, the request would never reach Squid.


> The browser does prompt for user credentials just as squid.conf is
> configured to do, but it is not blocking websites.
> 
> However, when I execute c-icap-client from command line it blocks the
> blocklisted websites.
> To check with c-icap-client I have used the below command:
> `c-icap-client -s url_check -x "X-Authenticated-User: dXNlcjE=" -req
> "https://www.facebook.com/" -v`
> 
> So if the blocking from c-icap client is working but blocking from
> browser is not working then

Please define "blocking from c-icap"

Please define "blocking from browser"


> something must be wrong with my squid.conf or the configuration part
> responsible for making c-icap and squid work together, right?

Unknown.

> 
> So what could be it? Please let me know if any other piece of
> information is required, I am not sure what else could be of use.
> 

To start with, please share log trace(s) from a transaction that you think
is failing: Squid access.log and the c-icap log. Maybe also a Squid
cache.log trace with debug_options 11,2 or ALL,2.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] print errormessage (like %E in ERR_* pages) in squid logfile ?

2020-06-17 Thread Alex Rousskov
On 6/17/20 9:17 AM, Dieter Bloms wrote:

> more and more clients aren't browsers but programs, which call a
> REST API through our squid proxy.
> 
> Those clients aren't able to show the error page (ERR_*) from the proxy
> in case the request wasn't successful for any reason.
> 
> I added %err_code and %err_detail, but %err_detail is filled with a "-"
> sign all the time in the logfiles.
> 
> For example:
> If the connection to a webserver fails, %err_code is filled with
> ERR_CONNECT_FAIL, but %err_detail is filled with "-" instead of the
> message "(110) Connection timed out"
> 
> Is it possible to log the error message like %E in the error pages ?


Hello Dieter,

No, it is not possible to log %E to access.log, and the errno-based
%E itself is often useless in such situations, especially in the
official code where it is often lost or stale.

However, your use case _is_ valid, and Factory is working on improving
detailing of various connectivity problems similar to the ones you are
talking about. There is no official pull request yet, but we are getting
very close. If you would like, please feel free to test-drive our
master/v6-based https://github.com/measurement-factory/squid/pull/63


HTH,

Alex.


Re: [squid-users] SQUID 4.12 (Debian 10, OpenSSL 1.1.1d) - SSL bump no server helllo

2020-06-17 Thread Alex Rousskov
On 6/17/20 9:14 AM, Loučanský Lukáš wrote:

> Just noticed that github version of HandShake.cc is much better "patched"

Squid should have proper support for GREASEd TLS version values (and
more!) since master/v6 commit eec67f0. That very recent change has not
been ported to earlier Squid versions yet.

https://github.com/squid-cache/squid/commit/eec67f0

Alex.


Re: [squid-users] Squid and c-icap's srv_url_check module

2020-06-17 Thread Amiq Nahas
On Wed, Jun 17, 2020 at 10:23 AM Amos Jeffries  wrote:
>
> On 16/06/20 1:55 am, Amiq Nahas wrote:
> > Hi Guys,
> >
> > I am trying to use the srv_url_check module to block websites.
> > I have configured squid with proxy authentication and followed this
> > wiki: https://sourceforge.net/p/c-icap/wiki/UrlCheckProfiles/
> > to configure c-icap and srv_url_check. Now, I am having trouble
> > configuring squid.conf. Below I have shared my configuration of squid.
> >
>
>
> "I am having trouble" is not sufficient details to investigate a problem.

Sorry, my bad.
After doing all the above configuration, the browser does not block
the websites in the blocklist.
The browser does prompt for user credentials just as squid.conf is
configured to do, but it is not blocking websites.

However, when I execute c-icap-client from command line it blocks the
blocklisted websites.
To check with c-icap-client I have used the below command:
`c-icap-client -s url_check -x "X-Authenticated-User: dXNlcjE=" -req
"https://www.facebook.com/" -v`
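
(Side note: the X-Authenticated-User ICAP header value is base64-encoded. A quick Python check of the value used above, assuming it simply encodes a plain username:)

```python
import base64

# The X-Authenticated-User ICAP header carries a base64-encoded identity.
# "dXNlcjE=" is the value passed to c-icap-client above.
token = "dXNlcjE="
username = base64.b64decode(token).decode("utf-8")
print(username)  # -> user1

# Encoding works the same way in reverse:
assert base64.b64encode(b"user1").decode("ascii") == token
```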

So if blocking from the c-icap client is working but blocking from the
browser is not, then something must be wrong with my squid.conf or the
configuration responsible for making c-icap and squid work together, right?

So what could be it? Please let me know if any other piece of
information is required, I am not sure what else could be of use.

Thanks
Amiq


[squid-users] print errormessage (like %E in ERR_* pages) in squid logfile ?

2020-06-17 Thread Dieter Bloms
Hello,

more and more clients aren't browsers but programs, which call a
REST API through our squid proxy.

Those clients aren't able to show the error page (ERR_*) from the proxy in
case the request wasn't successful for any reason.

I added %err_code and %err_detail, but %err_detail is filled with a "-"
sign all the time in the logfiles.

For example:
If the connection to a webserver fails, %err_code is filled with
ERR_CONNECT_FAIL, but %err_detail is filled with "-" instead of the
message "(110) Connection timed out"

Is it possible to log the error message like %E in the error pages ?

Thank you very much.


-- 
Regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] SQUID 4.12 (Debian 10, OpenSSL 1.1.1d) - SSL bump no server helllo

2020-06-17 Thread Loučanský Lukáš
 
Just noticed that the github version of Handshake.cc is much better "patched"
than my humble, pitiful attempt to quick-fix the parser. So in light of
self-investigation and the lack of information and experience (I'm sorry for
that) I may have over-reacted. But now it seems both modifications made it
work (recompiled with the github version of Handshake.cc). As I have no idea
what to expect from GREASEd connections, I think it would be better, as I've
already written, to check and sanitize every TLS header input... (version
4.12 from 9th June doesn't do that)

LL


Re: [squid-users] SQUID 4.12 (Debian 10, OpenSSL 1.1.1d) - SSL bump no server helllo

2020-06-17 Thread Loučanský Lukáš
This is the most naïve and dirtiest effort, but I don't know from where else
it is called (I'm not going to check and fix every caller that passes
nonsense numbers), so I went like this:

/// parse TLS ProtocolVersion (uint16) and convert it to AnyP::ProtocolVersion
static AnyP::ProtocolVersion
ParseProtocolVersion(Parser::BinaryTokenizer &tk, const char *contextLabel = ".version")
{
    Parser::BinaryTokenizerContext context(tk, contextLabel);
    uint8_t vMajor = tk.uint8(".major");
    uint8_t vMinor = tk.uint8(".minor");

    if (vMajor > 3)
        return AnyP::ProtocolVersion(AnyP::PROTO_TLS, 1, 0);

    if (vMajor == 0 && vMinor == 2)
        return AnyP::ProtocolVersion(AnyP::PROTO_SSL, 2, 0);

    Must(vMajor == 3);
    if (vMinor == 0)
        return AnyP::ProtocolVersion(AnyP::PROTO_SSL, 3, 0);

    return AnyP::ProtocolVersion(AnyP::PROTO_TLS, 1, (vMinor - 1));
}
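
For what it's worth, the fallback behavior of the quick-fix above can be modeled in a few lines of Python (a hypothetical model for illustration only, not Squid code):

```python
def parse_protocol_version(v_major, v_minor):
    """Model of the quick-fixed ParseProtocolVersion above."""
    # GREASEd/unknown majors (e.g. 0x7A = 122) are coerced to TLS/1.0
    if v_major > 3:
        return ("TLS", 1, 0)
    if v_major == 0 and v_minor == 2:
        return ("SSL", 2, 0)
    assert v_major == 3, "Must(vMajor == 3)"
    if v_minor == 0:
        return ("SSL", 3, 0)
    return ("TLS", 1, v_minor - 1)

print(parse_protocol_version(122, 122))  # GREASE 0x7A7A -> ('TLS', 1, 0)
print(parse_protocol_version(3, 4))      # 0x0304 -> ('TLS', 1, 3), i.e. TLS 1.3
```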


So - if someone tries to fool us with random numbers, rule it out as TLS 1.0.
I know it deserves more; this code does things it is not meant to be doing,
etc. (for every major version >3 it returns something). But:

2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(65) got: Extension.type=43 occupying 2 bytes @164 in 0x7ffcd4777170.
2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(65) got: Extension.data.length=11 occupying 2 bytes @166 in 0x7ffcd4777170.
2020/06/17 13:02:12.978 kid2| 24,8| SBuf.cc(38) SBuf: SBuf15611 created from id SBuf15576
2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(74) got: Extension.data.octets= 0a7a7a0304030303020301 occupying 11 bytes @168 in 0x7ffcd4777170.
2020/06/17 13:02:12.978 kid2| 24,8| SBuf.cc(70) ~SBuf: SBuf15611 destructed
2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(57) got: Extension occupying 15 bytes @164 in 0x7ffcd4777170.
2020/06/17 13:02:12.978 kid2| 24,8| SBuf.cc(38) SBuf: SBuf15612 created from id SBuf15610
2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(65) got: SupportedVersions.length=10 occupying 1 bytes @0 in 0x7ffcd4776fd0.
2020/06/17 13:02:12.978 kid2| 24,8| SBuf.cc(38) SBuf: SBuf15613 created from id SBuf15612
2020/06/17 13:02:12.978 kid2| 24,7| BinaryTokenizer.cc(74) got: SupportedVersions.octets= 7a7a0304030303020301 occupying 10 bytes @1 in 0x7ffcd4776fd0.
2020/06/17 13:02:12.979 kid2| 24,8| SBuf.cc(38) SBuf: SBuf15614 created from id SBuf15613
2020/06/17 13:02:12.979 kid2| 24,8| SBuf.cc(70) ~SBuf: SBuf15613 destructed
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=122 occupying 1 bytes @0 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=122 occupying 1 bytes @1 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=3 occupying 1 bytes @2 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=4 occupying 1 bytes @3 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=3 occupying 1 bytes @4 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=3 occupying 1 bytes @5 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=3 occupying 1 bytes @6 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=2 occupying 1 bytes @7 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=3 occupying 1 bytes @8 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=1 occupying 1 bytes @9 in 0x7ffcd4777010.
2020/06/17 13:02:12.979 kid2| 24,8| SBuf.cc(70) ~SBuf: SBuf15614 destructed
2020/06/17 13:02:12.979 kid2| 24,8| SBuf.cc(70) ~SBuf: SBuf15612 destructed
2020/06/17 13:02:12.979 kid2| 83,7| Handshake.cc(594) parseSupportedVersionsExtension: found TLS/1.3
2020/06/17 13:02:12.979 kid2| 24,8| SBuf.cc(70) ~SBuf: SBuf15610 destructed

Note 7a7a0304030303020301, 0x7A = 122
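
Those repeated-byte values (0x7A7A, 0xCACA, 0x3A3A, ...) match the GREASE pattern from RFC 8701: both bytes equal, with a low nibble of 0xA. A small Python predicate (illustrative only, not Squid code):

```python
def is_grease(value16):
    """True for TLS GREASE values (0x0A0A, 0x1A1A, ..., 0xFAFA) per RFC 8701."""
    hi, lo = value16 >> 8, value16 & 0xFF
    return hi == lo and (lo & 0x0F) == 0x0A

# The values seen in the logs are all GREASE; real TLS versions are not.
for v in (0x7A7A, 0xCACA, 0x3A3A, 0x0304):
    print(hex(v), is_grease(v))
```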

I think fixing it everywhere would involve BinaryTokenizing the extension
string (like tkVersions) and checking every value sent to
ParseProtocolVersion - in the Handshake.cc file in about six places. It seems
very likely that *some* vendors will send nonsense values to the other parts
as well, so it would be nice to have them all sanitized. To me it looks like
a Google initiative, but I could be wrong. Anyway, what seemed to be a
problem with TLS on my box now seems to be a problem with additive, random
numbers in the supported versions string. Waiting for someone to investigate
it further...

LL



Re: [squid-users] SQUID 4.12 (Debian 10, OpenSSL 1.1.1d) - SSL bump no server helllo

2020-06-17 Thread Loučanský Lukáš
Found this:

2020/06/17 08:06:31.292 kid2| 24,7| BinaryTokenizer.cc(74) got: SupportedVersions.octets= caca0304030303020301 occupying 10 bytes @1 in 0x7ffd9ba4a0b0.

0x0301 - 0x0304 -> TLS versions up to TLS 1.3
0xcaca = non-existent

(a few lines further:)
BinaryTokenizer.cc(65) got: supported_version.major=202 occupying 1 bytes @0 in 0x7ffd9ba4a0f0.

Note 0xCA = 202 dec

More examples:
2020/06/17 08:06:31.312 kid1| 24,7| BinaryTokenizer.cc(74) got: SupportedVersions.octets= 3a3a0304030303020301 occupying 10 bytes @1 in 0x7ffe348a1f30.
2020/06/17 08:06:31.312 kid1| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=58 occupying 1 bytes @0 in 0x7ffe348a1f70.

Note 0x3A = 58 dec

2020/06/17 08:06:31.324 kid1| 24,7| BinaryTokenizer.cc(74) got: SupportedVersions.octets= 0304030303020301 occupying 10 bytes @1 in 0x7ffe348a1f30.
2020/06/17 08:06:31.324 kid1| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=170 occupying 1 bytes @1 in 0x7ffe348a1f70.

Note 0xAA = 170 dec
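
Decoding those SupportedVersions.octets strings by hand confirms the pattern; a throwaway Python helper (illustrative only, not Squid code):

```python
def parse_supported_versions(hex_octets):
    """Split a SupportedVersions octet string into 16-bit version values."""
    data = bytes.fromhex(hex_octets)
    return [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]

versions = parse_supported_versions("caca0304030303020301")
print([hex(v) for v in versions])
# the first entry, 0xcaca, is the bogus value; the rest are TLS 1.3 down to 1.0
```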


So - I think this is either a) a badly parsed string in
/parser/BinaryTokenizer.cc (not likely), or b) /security/Handshake.cc (line
526 and beyond): Security::HandshakeParser does not ignore obviously
nonsense versions.

What I see is that it calls the tokenizer to get tkVersions, then asks
ParseProtocolVersion to check each value. ParseProtocolVersion checks for
version 0.2 or expects version 3.x, but gets majors of 202, 58, etc.

It seems logical, to my limited knowledge, to check for and ignore unknown
(GREASEd) versions. I think this is the while statement involved:

    while (!tkVersions.atEnd()) {
        const auto version = ParseProtocolVersion(tkVersions, "supported_version");
        if (!supportedVersionMax || TlsVersionEarlierThan(supportedVersionMax, version))
            supportedVersionMax = version;
    }
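
One possible shape for that check, sketched in Python rather than the actual C++ (hypothetical; plain numeric comparison is enough once only 0x03xx values remain):

```python
def max_supported_version(versions):
    """Pick the highest offered version, skipping GREASE/unknown majors."""
    best = None
    for v in versions:
        hi, lo = v >> 8, v & 0xFF
        if hi == lo and (lo & 0x0F) == 0x0A:
            continue  # GREASE value: ignore instead of throwing Must(vMajor == 3)
        if hi != 3:
            continue  # unknown major version: ignore as well
        if best is None or v > best:
            best = v
    return best

# Chrome's list from the logs: a GREASE value first, then the real versions
print(hex(max_supported_version([0x7A7A, 0x0304, 0x0303, 0x0302, 0x0301])))
```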

It calls the parser, and fails according to:
2020/06/17 08:06:31.312 kid1| 0,3| Handshake.cc(119) ParseProtocolVersion: check failed: vMajor == 3
exception location: Handshake.cc(119) ParseProtocolVersion

It fails while calling it, so the check must happen before calling
ParseProtocolVersion or inside it; there is the statement Must(vMajor == 3)
on line 119, so I think this is where it throws. Would a simple
if (vMajor <= 3)... statement be sufficient? And what value should it return
for a non-parsable version? Surely not some arbitrary value such as
TLS1.something or SSLv3. The function handles SSLv2 and SSLv3 (which imply
vMajor <= 3), and for versions 3.x with x > 0 returns TLS1.(vMinor - 1). So
what should it do when called with version 0xCACA or 0x3A3A? I think there
should be a check in the mentioned while statement, but that involves
parsing the major and minor version, which ParseProtocolVersion already
does. The goal of the loop is to find the maximum supported TLS version, so
it should not fail on non-existent versions. I think the while statement
should sort this out rather than asking the parser for a TLS version for
"random" numbers.

LL



-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Loučanský Lukáš
Sent: Wednesday, June 17, 2020 9:11 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID 4.12 (Debian 10,OpenSSL 1.1.1d) - SSL bump no 
server helllo


> That is somewhat useful. TLS version being received is not valid.

Ok - although this is a squid users forum - this could be even more useful:

Firefox - http://download.kjj.cz/pub/ssl/firefox.txt - it goes through everything to the GET / HTTP/1.1 request

Chrome - http://download.kjj.cz/pub/ssl/chrome.txt - it goes from
2020/06/17 08:06:31.292 kid1| 93,7| HttpRequest.cc(63) ~HttpRequest: destructed, this=0x55e730f38e50
2020/06/17 08:06:31.292 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=202 occupying 1 bytes @0 in 0x7ffd9ba4a0f0.
2020/06/17 08:06:31.292 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf71215602 destructed
2020/06/17 08:06:31.292 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=202 occupying 1 bytes @1 in 0x7ffd9ba4a0f0.
2020/06/17 08:06:31.292 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf71215601 destructed

to
2020/06/17 08:06:31.292 kid2| 0,3| Handshake.cc(119) ParseProtocolVersion: check failed: vMajor == 3
exception location: Handshake.cc(119) ParseProtocolVersion

It is not working in Chrome-based browsers - Edge, Opera... It is working in MSIE and FF.

LL



Re: [squid-users] SQUID 4.12 (Debian 10, OpenSSL 1.1.1d) - SSL bump no server helllo

2020-06-17 Thread Loučanský Lukáš

> That is somewhat useful. TLS version being received is not valid.

Ok - although this is a squid users forum - this could be even more useful:

Firefox - http://download.kjj.cz/pub/ssl/firefox.txt - it goes through everything to the GET / HTTP/1.1 request

Chrome - http://download.kjj.cz/pub/ssl/chrome.txt - it goes from
2020/06/17 08:06:31.292 kid1| 93,7| HttpRequest.cc(63) ~HttpRequest: destructed, this=0x55e730f38e50
2020/06/17 08:06:31.292 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.major=202 occupying 1 bytes @0 in 0x7ffd9ba4a0f0.
2020/06/17 08:06:31.292 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf71215602 destructed
2020/06/17 08:06:31.292 kid2| 24,7| BinaryTokenizer.cc(65) got: supported_version.minor=202 occupying 1 bytes @1 in 0x7ffd9ba4a0f0.
2020/06/17 08:06:31.292 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf71215601 destructed

to
2020/06/17 08:06:31.292 kid2| 0,3| Handshake.cc(119) ParseProtocolVersion: check failed: vMajor == 3
exception location: Handshake.cc(119) ParseProtocolVersion

It is not working in Chrome-based browsers - Edge, Opera... It is working in MSIE and FF.

LL



Re: [squid-users] Switch cache peer Parent server for every 30 minutes

2020-06-17 Thread Prem Chand
Hi Alex,

Could you please share a rough sketch example for the statement below?
"but I suspect that a clever
combination of annotate_transaction and "note" ACLs in cache_peer_access
rules can be used to force a particular cache peer selection order."

On Mon, Jun 15, 2020 at 7:14 PM Alex Rousskov <rouss...@measurement-factory.com> wrote:

> On 6/15/20 3:26 AM, Prem Chand wrote:
>
> > I stopped the peerA(purposefully)  and noticed that requests are failing
> > for the time slots that are going through peerA. I used
> > "connect-fail-limit" in cache_peer  but it's not working. Is there any
> > way we can address this issue using the same solution considering how to
> > handle the requests if any of the parent  peer goes down?
>
> I am not sure, but I think it should be possible to always give Squid
> three peers to use, in the right order. There is no peer selection
> algorithm that will do that automatically, but I suspect that a clever
> combination of annotate_transaction and "note" ACLs in cache_peer_access
> rules can be used to force a particular cache peer selection order.
>
> https://wiki.squid-cache.org/Features/LoadBalance#Go_through_a_peer
>
> The trick is to place one "best" peer into the first group (your rules
> already do that!), but then stop banning peers so that the other two
> peers are added to the "All Alive Parents" group (your rules currently
> deny those two peers from being considered). It may be possible to stop
> banning peers while the peer selection code is running its second pass
> by changing request annotation.
>
> I am sorry that I do not have enough time to sketch an example.
>
> Alex.
>
>
>
> > On Fri, Jun 12, 2020 at 6:47 PM Alex Rousskov wrote:
> >
> > On 6/11/20 11:52 PM, Prem Chand wrote:
> >
> > > It's working as expected. I tried to allow only specific domains
> > > during the time by adding the below acl, but I'm getting HTTP
> > > status code 503.
> >
> > > acl usePeerB time 00:30-00:59
> > > acl usePeerB time 02:00-02:29
> > > acl alloweddomains dstdomain google.com facebook.com
> >
> > > cache_peer_access peerA allow usePeerA alloweddomains
> > > cache_peer_access peerB allow usePeerB alloweddomains
> > > cache_peer_access peerC allow !usePeerA !usePeerB alloweddomains
> >
> > Assuming there are no other cache peers, the above rules leave no
> > forwarding path for a request to a banned domain. If you want to ban
> > such requests, use http_access instead of cache_peer_access.
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> > > On Thu, Jun 11, 2020 at 4:54 AM Alex Rousskov wrote:
> > >
> > > On 6/10/20 12:20 PM, Antony Stone wrote:
> > > > On Wednesday 10 June 2020 at 18:11:03, Prem Chand wrote:
> > > >
> > > >> Hi Alex,
> > > >>
> > > >> Thanks for responding to my issue. I didn't get how the math
> > > >> was done (why it's multiplied by 2) to get 16 slots; if possible
> > > >> could you please elaborate with an example.
> > > >
> > > > I believe what Alex meant was:
> > > >
> > > > You want 30-minute timeslots for each of 3 peers, which is 48
> > > > half-hour timeslots throughout the day.
> > > >
> > > > However, you only need to define 48/3 of these for peer A, and 48/3
> > > > of them for peer B, and then let peer C deal with anything not
> > > > already handled (so it doesn't need its own definitions).
> > > >
> > > > 48/3 = 16, therefore you define 16 half-hour periods when you want
> > > > peer A to do the work, 16 half-hour periods for peer B, and then
> > > > just say "peer C, handle anything left over".
> > >
> > > Thank you, Antony! Here is an untested sketch:
> > >
> > >   acl usePeerA time 00:00-00:29
> > >   acl usePeerA time 01:30-01:59
> > >   ... a total of 16 ORed lines for the first peer ...
> > >   ... each line matches a unique 30 minute period ...
> > >
> > >
> > >   acl usePeerB time 00:30-00:59
> > >   acl usePeerB time 02:00-02:29
> > >   ... a total of 16 ORed lines for the second peer ...
> > >   ... each line matches a unique 30 minute period ...
> > >
> > >   # and now match peer to its time slots
> > >   cache_peer_access peerA allow usePeerA
> > >   cache_peer_access peerB allow usePeerB
> > >   cache_peer_access peerC allow !usePeerA !usePeerB
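
(The 16 time lines per peer in the sketch above can be generated mechanically. A hypothetical Python generator, round-robin over 48 half-hour slots; not part of anyone's posted config:)

```python
def peer_slots(peer_index, n_peers=3, slot_minutes=30):
    """Return the time ranges assigned to one peer, round-robin over the day."""
    lines = []
    for slot in range(24 * 60 // slot_minutes):  # 48 half-hour slots per day
        if slot % n_peers != peer_index:
            continue
        start = slot * slot_minutes
        end = start + slot_minutes - 1  # e.g. 00:00-00:29
        lines.append("%02d:%02d-%02d:%02d"
                     % (start // 60, start % 60, end // 60, end % 60))
    return lines

for rng in peer_slots(0)[:2]:
    print("acl usePeerA time " + rng)  # matches the first two lines above
```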
> > >
> > >
> > > The above may need further adjustments and polishing. For example,
> > > I am not sure how Squid will round these time values. The above
> > > assumes that the 00:29 limit includes all 60 seconds up to (but