Re: [squid-users] Simulate connections for tuning squid?

2024-06-16 Thread David Touzeau


Hi

We have built such a tool for ourselves.
I suggest downloading our ISO and installing a new (virtual) server.
You will then have this feature:
https://wiki.articatech.com/en/proxy-service/tuning/stress-your-proxy-server

You can easily use this feature in a variety of scenarios.

It is available free of charge, with no restrictions.



On 24/05/2024 at 16:01, Alex Rousskov wrote:

On 2024-05-24 01:43, Periko Support wrote:


I would like to know whether there is a tool that can simulate
connections to Squid and help us tune it for different scenarios,
such as small, medium, or large networks.


Yes, there are many tools, offering various tradeoffs, including:

* Apache "ab": Not designed for testing proxies but well-known and 
fairly simple.


* Web Polygraph: Designed for testing proxies but has a steep learning 
curve and lacks fresh releases.


* curl/wget/netcat: Not designed for testing performance but 
well-known and very simple.
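For example, a first smoke test with ab and curl might look like this (a sketch assuming Squid listens on 127.0.0.1:3128; adjust host, port, and URL to your setup):

```shell
# Apache "ab": 1000 requests, 50 concurrent, all routed through the proxy
ab -X 127.0.0.1:3128 -n 1000 -c 50 http://example.com/

# curl: a single request through the proxy, reporting the total transfer time
curl -x http://127.0.0.1:3128 -o /dev/null -s -w 'total: %{time_total}s\n' http://example.com/
```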


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  


Re: [squid-users] Squid as an education tool

2024-02-12 Thread David Touzeau
being used as a policy enforcer 
rather than an education tool.
I believe in education as one of the top priorities, compared to enforcing 
policies.
The nature of policies depends on the environment and the risks, but ultimately, 
understanding the meaning of a policy
contributes a lot to the cooperation of the user or employee.

I have yet to see a solution like the following:
Each user has a profile which, when a policy block is received, offers 
an option to temporarily allow
the specific site or domain.
Also, I have not seen an implementation that allows the user to disable or 
lower the policy strictness for a short period of time.

I am looking for such implementations, if they already exist, to run education 
sessions with teenagers.

Thanks,
Eliezer








Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau

Thanks, Alex.

This will fix the issue!

On 31/01/2024 at 17:43, Alex Rousskov wrote:

On 2024-01-31 09:23, David Touzeau wrote:


Hi, %note is used by our external_acls and to log other tokens,
and we also use Group as a token.
It can be disabled by directly removing the Kerberos code from the source 
before compiling, but I would like to know if there is another way.


In most cases, one does not have to (and does not really want to) log 
_all_ transaction annotations. It is possible to specify annotations 
that should be logged by using the annotation name as a %note parameter.


For example, to just log annotation named foo, use %note{foo} instead 
of %note.


In many cases, folks who log multiple annotations prepend the 
annotation name so that it is easier (especially for humans) to 
extract the right annotation from the access log record:


    ... foo=%note{foo} bar=%note{bar} ...
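Put together in squid.conf, this could look like the following (the format name and log path are illustrative; the other % codes are taken from the standard "squid" logformat documented at the URL above):

```
logformat withnotes %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru foo=%note{foo} bar=%note{bar}
access_log /var/log/squid/access_notes.log withnotes
```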


HTH,

Alex.



On 31/01/2024 at 14:36, Andrey K wrote:

Hello, David,

> Any way to remove these entries from the log?
I think you should correct the logformat directive in your Squid 
configuration to disable annotation logging (%note): 
http://www.squid-cache.org/Doc/config/logformat/


Kind regards,
      Ankor.





Wed, 31 Jan 2024 at 15:51, David Touzeau :

    Any way to remove these entries from the log?

    On 31/01/2024 at 10:01, Andrey K wrote:

    Hello, David,

    The group values in your logs are Base64-encoded binary AD group SIDs.

    You can try to decode them with a simple Perl script, sid-reader.pl
    (see below):

    echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA== | base64 -d | perl sid-reader.pl

    And finally convert the SID to a group name:
    wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

    Kind regards,
          Ankor


    *sid-reader.pl:*
    #!/usr/bin/perl
    # https://lists.samba.org/archive/linux/2005-September/014301.html

    my $binary_sid;
    my @parts;
    while (<>) {
        push @parts, $_;
    }
    $binary_sid = join('', @parts);

    my ($sid_rev, $num_auths, $id1, $id2, @ids) =
        unpack("H2 H2 n N V*", $binary_sid);
    my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);

    print "$sid_string\n";
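For reference, a rough Python equivalent of the script above (a sketch assuming the blob is a standard Windows binary SID: one revision byte, one sub-authority count byte, a 48-bit big-endian identifier authority, then 32-bit little-endian sub-authorities; note that the Group tokens in the log above appear truncated, and truncated blobs will not decode):

```python
import base64
import struct

def decode_sid(blob_b64: str) -> str:
    """Decode a base64-encoded standard Windows binary SID to 'S-R-A-s1-...-sn'."""
    raw = base64.b64decode(blob_b64)
    revision, count = raw[0], raw[1]
    authority = int.from_bytes(raw[2:8], "big")        # 48-bit identifier authority
    subs = struct.unpack_from("<%dI" % count, raw, 8)  # 'count' little-endian sub-authorities
    return "-".join(["S", str(revision), str(authority)] + [str(s) for s in subs])

# Round-trip check against a synthetic SID built from the wbinfo example
# in this thread (the binary layout used here is the standard one):
packed = bytes([1, 5]) + (5).to_bytes(6, "big") + struct.pack(
    "<5I", 21, 407062282, 1694779757, 312552118, 71814)
print(decode_sid(base64.b64encode(packed).decode()))
# S-1-5-21-407062282-1694779757-312552118-71814
```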


    Tue, 30 Jan 2024 at 18:49, David Touzeau :


    Hi, when using Kerberos with Squid, the access log contains long
    Group tags.

    I would like to know how to prevent Squid from grabbing groups
    during authentication verification and, alternatively, how to
    decode the Group value.

    Example from access.log:

    |1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443
    CONNECT eu-mobile.events.data.microsoft.com:443 leblud
    HIER_DIRECT/13.69.239.72:443 -
    mac="00:00:00:00:00:00"
user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAU

Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau





Hi, %note is used by our external_acls and to log other tokens,
and we also use Group as a token.
It can be disabled by directly removing the Kerberos code from the source 
before compiling, but I would like to know if there is another way.


On 31/01/2024 at 14:36, Andrey K wrote:

Hello, David,

> Any way to remove these entries from the log?
I think you should correct the logformat directive in your Squid 
configuration to disable annotation logging (%note): 
http://www.squid-cache.org/Doc/config/logformat/


Kind regards,
      Ankor.





Wed, 31 Jan 2024 at 15:51, David Touzeau :

Any way to remove these entries from the log?

On 31/01/2024 at 10:01, Andrey K wrote:

Hello, David,

The group values in your logs are Base64-encoded binary AD group SIDs.
You can try to decode them with a simple Perl script, sid-reader.pl
(see below):

echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA== | base64 -d | perl sid-reader.pl

And finally convert the SID to a group name:
wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

Kind regards,
      Ankor


*sid-reader.pl:*
#!/usr/bin/perl
# https://lists.samba.org/archive/linux/2005-September/014301.html

my $binary_sid;
my @parts;
while (<>) {
    push @parts, $_;
}
$binary_sid = join('', @parts);

my ($sid_rev, $num_auths, $id1, $id2, @ids) =
    unpack("H2 H2 n N V*", $binary_sid);
my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);

print "$sid_string\n";


Tue, 30 Jan 2024 at 18:49, David Touzeau :


Hi, when using Kerberos with Squid, the access log contains long
Group tags.

I would like to know how to prevent Squid from grabbing groups
during authentication verification and, alternatively, how to
decode the Group value.

Example from access.log:

|1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443
CONNECT eu-mobile.events.data.microsoft.com:443 leblud
HIER_DIRECT/13.69.239.72:443 -
mac="00:00:00:00:00:00"

user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEST/MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESLaEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlvQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESPLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES98IAAA==%0D%0Agroup:%20AQUAA
AUVCkdDGG1JBGW2KqEShPgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaHsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmegAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESiRgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/tgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5IEAAA==%0D%0Agroup:%20AQUAAAU

Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau

Any way to remove these entries from the log?

On 31/01/2024 at 10:01, Andrey K wrote:

Hello, David,

The group values in your logs are Base64-encoded binary AD group SIDs.
You can try to decode them with a simple Perl script, sid-reader.pl
(see below):

echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA== | base64 -d | perl sid-reader.pl


And finally convert the SID to a group name:
wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

Kind regards,
      Ankor


*sid-reader.pl:*
#!/usr/bin/perl
# https://lists.samba.org/archive/linux/2005-September/014301.html

my $binary_sid;
my @parts;
while (<>) {
    push @parts, $_;
}
$binary_sid = join('', @parts);

my ($sid_rev, $num_auths, $id1, $id2, @ids) =
    unpack("H2 H2 n N V*", $binary_sid);
my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);

print "$sid_string\n";


Tue, 30 Jan 2024 at 18:49, David Touzeau :


Hi, when using Kerberos with Squid, the access log contains long Group
tags.

I would like to know how to prevent Squid from grabbing groups during
authentication verification and, alternatively, how to decode the Group
value.

Example from access.log:

|1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443 CONNECT
eu-mobile.events.data.microsoft.com:443 leblud
HIER_DIRECT/13.69.239.72:443 -
mac="00:00:00:00:00:00"

user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEST/MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESLaEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlvQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESPLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES98IAAA==%0D%0Agroup:%20AQUAA
AUVCkdDGG1JBGW2KqEShPgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaHsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmegAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESiRgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/tgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5IEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESN9cAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESbQEBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESjZwAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmsQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESvtIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGAEBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESePYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESfp0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuj0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESA8gAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES7p8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQu==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESZ50AAA==%0D%0Agroup:%20AQUAAAUVAA

[squid-users] Long Group TAG in access.log when using kerberos

2024-01-30 Thread David Touzeau
0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESZ3sAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESTvMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES3HgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJdkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5YcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES6AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESd/YAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESUsQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESz3gAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShhgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMLEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESP+==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESk/QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESTfoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESixgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShccAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESVwoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQuwAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESA9==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQcMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0QUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQO==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESu5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESYcIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESE9MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES7oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES9YQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES9oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESd5EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES84QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES74QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESgHsAAA==%0D%0Agroup:%20AQEAABIB%0D%0Aaccessrule:%20final_allow%0D%0Afirst:%20ERROR%0D%0Awebfilter:%20pass%0D%0Aexterr:%20invalid_code_431%0D%0A 
ua="-" exterr="-|-"|




Re: [squid-users] Unable to start Squid 6.3 "earlyMessages->size() < 1000"

2023-10-02 Thread David Touzeau

Thank you, you've enlightened me.
I had the GlobalWhitelistDSTNet directive declared twice in two 
different includes.
An identical ACL declared in two different places contradicted itself 
on the same addresses and generated mass warnings.
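A quick way to spot such duplicates across included files is a shell pipeline like the following (the demo directory and file contents are made up; point the grep at your real include files):

```shell
# Create two hypothetical include files that both declare the same value
mkdir -p /tmp/squid-acl-demo
printf 'acl GlobalWhitelistDSTNet dst 64.34.72.226\n' > /tmp/squid-acl-demo/a.conf
printf 'acl GlobalWhitelistDSTNet dst 64.34.72.226\n' > /tmp/squid-acl-demo/b.conf

# Print every dst value that is declared more than once
grep -h '^acl GlobalWhitelistDSTNet dst' /tmp/squid-acl-demo/*.conf \
  | awk '{print $4}' | sort | uniq -d
# 64.34.72.226
```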


On 02/10/2023 22:01, Alex Rousskov wrote:



Since Squid 6.x we have seen this strange behavior with dst ACLs;
many warnings are generated:

2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.226' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.226' is a subnetwork of 
(A) '64.34.72.226'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.226' is 
ignored to keep splay tree searching predictable


(B) '*64.34.72.226*' is a subnetwork of (A) '*64.34.72.226*' --> 
Sure, but this is the same IP address.


Is it possible that you have two 64.34.72.226 entries in that 
GlobalWhitelistDSTNet ACL? Perhaps in another included configuration 
file or something like that?



You should probably remove '64.34.72.226' from the ACL named 
'GlobalWhitelistDSTNet' --> Why? This is the only occurrence of that IP 
address in the ACL.


Squid thinks that there is more than one copy of the 64.34.72.226 address 
in the GlobalWhitelistDSTNet ACL. It could be a Squid bug, of course. Please 
share a configuration that reproduces the issue, or a pointer to 
compressed "squid -N -X -d9 ..." output while reproducing the problem.



2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"

Aborted


This assert is a side effect of the above ACL problem/bug - you 
probably have many IPs in that ACL, and the corresponding WARNINGs 
exceed Squid's hard-coded message accumulation limit. Now that we know 
how a broken(*) configuration can produce so many early cache.log 
messages, we should probably modify Squid to quit without asserting, 
but let's focus on the root cause of your problems -- those WARNING 
messages.


(*) I am not implying that _your_ configuration is broken.


Cheers,

Alex.


2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of 
(A) '64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is 
ignored to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.230' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of 
(A) '64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is 
ignored to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.230' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.232' is a subnetwork of 
(A) '64.34.72.232'


After all these warnings, Squid refuses to start with this error:

*2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"**

**Aborted*

How can this be avoided?





[squid-users] Unable to start Squid 6.3 "earlyMessages->size() < 1000"

2023-10-02 Thread David Touzeau


Hi

Since Squid 6.x we have seen this strange behavior with dst ACLs;
many warnings are generated:

2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.226' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.226' is a subnetwork of (A) 
'64.34.72.226'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.226' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.226' 
from the ACL named 'GlobalWhitelistDSTNet'



(B) '*64.34.72.226*' is a subnetwork of (A) '*64.34.72.226*' --> Sure, 
but this is the same IP address.


You should probably remove '64.34.72.226' from the ACL named 
'GlobalWhitelistDSTNet' --> Why? This is the only occurrence of that IP address in the ACL.



2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of (A) 
'64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.230' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of (A) 
'64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.230' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.232' is a subnetwork of (A) 
'64.34.72.232'


After all these warnings, Squid refuses to start with this error:

*2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"**

**Aborted*

How can this be avoided?



Re: [squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-09-18 Thread David Touzeau

Many thanks, Francesco!


On 17/09/2023 16:55, Francesco Chemolli wrote:

Hi David,
PR 1481 <https://github.com/squid-cache/squid/pull/1481> should 
address your problem. It needs to be reviewed,
merged to trunk, and backported to v6, so don't hold your breath,
but it should be just a matter of time.
Once done, you will also have to add a configuration line to your 
squid.conf (manual 
<http://www.squid-cache.org/Doc/config/cache_log_message/>).


On Mon, Aug 28, 2023 at 10:59 PM Francesco Chemolli 
 wrote:


That's a good question; not right now, unless you're willing to
patch the Squid sources.
In that case, just remove the debugs() statement in lines 200-203
of file src/helper/Reply.cc.



On Mon, Aug 28, 2023 at 9:52 PM David Touzeau
 wrote:

Thank you.

As these changes affect many things for us (we use tags for
statistics / Elasticsearch), and this behavior seems to be
just a warning (Squid still appears to work as expected, e.g. note ACLs):

Is there a way to remove these warnings? They increase
I/O and cache.log size dramatically.

regards

On 28/08/2023 22:46, Francesco Chemolli wrote:

Hi David,
   you should use
itchart_=PASS

The trailing underscore signals Squid that this is a custom
header.

On Mon, Aug 28, 2023 at 3:54 PM David Touzeau
 wrote:


Hi

Since 6.2 ( aka migrating from 5.8 )

Squid complains about tokens sent by external_acl_helper.

The external ACL helper sends
"OK itchart=PASS user=dtouzeau category=143
category-name=Trackers clog=cinfo:143-Trackers;"

Squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to
add a trailing underscore: itchart_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: category=143
    advice: If this is a custom annotation, rename it to
add a trailing underscore: category_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: category-name=Trackers
    advice: If this is a custom annotation, rename it to
add a trailing underscore: category-name_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to
add a trailing underscore: clog_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to
add a trailing underscore: itchart_
    current master transaction: master278

Should the helper send, instead of "itchart=PASS",

"itchart_=PASS"
or
"itchart_PASS"

    ?




-- 
David Touzeau - Artica Tech France

Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



-- 
        Francesco










Re: [squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-08-28 Thread David Touzeau

Thank you.

As these changes affect many things for us (we use tags for statistics / 
Elasticsearch), and this behavior seems to be just a warning (Squid 
still appears to work as expected, e.g. note ACLs):

Is there a way to remove these warnings? They increase I/O and 
cache.log size dramatically.


regards

On 28/08/2023 22:46, Francesco Chemolli wrote:

Hi David,
   you should use
itchart_=PASS

The trailing underscore signals Squid that this is a custom header.
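Applied to the helper reply from the original message, each custom key gets the trailing underscore, while the Squid-defined user= keyword stays as-is (a sketch; key names are taken from the thread):

```
OK itchart_=PASS user=dtouzeau category_=143 category-name_=Trackers clog_=cinfo:143-Trackers;
```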

On Mon, Aug 28, 2023 at 3:54 PM David Touzeau  
wrote:



Hi

Since 6.2 ( aka migrating from 5.8 )

Squid complains about tokens sent by external_acl_helper.

The external ACL helper sends
"OK itchart=PASS user=dtouzeau category=143 category-name=Trackers
clog=cinfo:143-Trackers;"

Squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
itchart=PASS
    advice: If this is a custom annotation, rename it to add a
trailing underscore: itchart_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
category=143
    advice: If this is a custom annotation, rename it to add a
trailing underscore: category_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
category-name=Trackers
    advice: If this is a custom annotation, rename it to add a
trailing underscore: category-name_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to add a
trailing underscore: clog_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
itchart=PASS
    advice: If this is a custom annotation, rename it to add a
trailing underscore: itchart_
    current master transaction: master278

Should the helper send, instead of "itchart=PASS",

"itchart_=PASS"
or
"itchart_PASS"

?











[squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-08-28 Thread David Touzeau


Hi

Since 6.2 ( aka migrating from 5.8 )

Squid complains about tokens sent by external_acl_helper.

The external ACL helper sends
"OK itchart=PASS user=dtouzeau category=143 category-name=Trackers 
clog=cinfo:143-Trackers;"

Squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: itchart_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: category=143
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: category_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: category-name=Trackers
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: category-name_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: clog_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: itchart_

    current master transaction: master278

Should the helper, instead of "itchart=PASS", send

"itchart_=PASS"
or
"itchart_PASS"

?
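For what it's worth, the warning's advice is to rename the key itself while keeping the "=value" part, i.e. "itchart_=PASS". A rough sketch of a concurrent helper emitting the renamed keys (the classification logic is hypothetical; "user=" is left untouched because it is a reply keyword Squid itself consumes):

```python
#!/usr/bin/env python3
"""Sketch of a concurrent external ACL helper for Squid 6+.

Per the WARNING's advice, custom annotation keys are renamed with a
trailing underscore before the "=" (e.g. "itchart_=PASS"); "user=" is
left as-is because it is a reply keyword Squid itself understands.
"""
import sys


def lookup(tokens: str) -> str:
    # Hypothetical classification; a real helper would inspect the
    # %LOGIN / %DST values received in `tokens`.
    return ("OK itchart_=PASS user=dtouzeau category_=143 "
            "category-name_=Trackers clog_=cinfo:143-Trackers;")


def handle(line: str) -> str:
    # With concurrency=N, each request line starts with a channel ID
    # that must be echoed back at the start of the reply.
    channel, _, rest = line.partition(" ")
    return f"{channel} {lookup(rest)}"


if __name__ == "__main__" and not sys.stdin.isatty():
    for raw in sys.stdin:
        print(handle(raw.rstrip("\n")), flush=True)
```

The underscore-suffixed keys then reach access logs and other directives as ordinary custom annotations instead of colliding with Squid's reserved names.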




--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] %LOGIN place in squid 5.8 acls

2023-04-24 Thread David Touzeau

Thanks Amos for pointing out the mistake; yes, my explanation was wrong.
You are right: the first object !allowed_domains matches, so Squid then 
evaluates the second object. This is expected behavior.


Following your suggestion, my problem is the first rule "http_access allow 
noauth_sites" being in first place:
yes, it will allow requests, but those requests are then allowed past all 
the other rules too.

It makes sense: why evaluate all the other rules if the first one already 
allows the request?

If I add office365.com to the noauth_sites object but I do not want 
office365.com for limited_users, putting noauth_sites in first place will 
bypass all the "deny" rules.


Am I wrong?


On 24/04/2023 11:22, Amos Jeffries wrote:

On 24/04/2023 11:33 am, David Touzeau wrote:
We have a "problem" with ACLs, and I don't know how to address this 
situation in Squid 5.8

Let me explain:
We have an Active Directory group named limited_users that is only 
allowed to surf on a very limited list of websites.
These users are therefore forbidden to surf on all sites not listed 
in allowed_domains
On the other hand, we have websites in noauth_sites that do not need 
to be authenticated by squid but are not allowed to be used by 
limited_users group


In logic, we would write the following ACLs.

external_acl_type ads_group ttl=3600 negative_ttl=1 concurrency=50 
children-startup=1 children-idle=1 children-max=20 ipv4 %LOGIN 
/lib/squid3/groups.pl


acl limited_users external ads_group limited_users


This acl requires both login to succeed and group to match in order to 
return MATCH.




acl allowed_domains dstdomain siteallowed.com
acl allowed_domains dstdomain siteallowed.fr
acl allowed_domains dstdomain siteallowed.ch

acl noauth_sites dstdomain office365.com


http_access deny !allowed_domains limited_users all #ACL1
http_access allow noauth_sites #ACL2

But in this case, accessing office365.com forces Squid to send the 
407 Authentication request in order to evaluate limited_users in 
#ACL1; the second ACL then never takes effect because the request 
is blocked by the 407 first.


Sounds correct.

The %LOGIN switch in the external ACL ads_group activates the 
identification mode.


Yes.

If we use the %un switch instead, it works, but the opposite problem 
appears: ACL#1 is not processed anymore, since authentication is not 
requested because the %un switch is too permissive.


Yes. The login is not existing, therefore has no group.


What I don't understand is that SQUID is trying to calculate the 
limited_user object when the first allowed_domain object already 
returns FALSE.


You configured the "!" (not) operator to invert the match result.
Returning FALSE becomes a MATCH.


Whatever the result of the objects that follow allowed_domain, the 
rule will always fail.


Not quite. A request that provides credentials associated with the 
expected group will pass.


In the case where limited_user is in the first place, the logic is 
correct.


Two questions:

Is there a way for SQUID to not compute all http_access objects if 
the first one fails?


No. Because there is more than one HTTP request going on here. Each 
request is independent for Squid.




What would be the best rule that could meet this goal?


Structure your access lines as such;

  # things not requiring login are checked first
  http_access allow noauth_sites

  # then do the login
  http_access deny !login

  # then check things that need login
  http_access deny limited_users !allowed_sites


HTH
Amos
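Putting that ordering together with the ACLs from the original post, a complete sketch (the `login` ACL in the answer is assumed to be a standard proxy_auth ACL; the external_acl_type options are trimmed, and the final allow rule is an assumption about the intended default):

```
external_acl_type ads_group ttl=3600 negative_ttl=1 concurrency=50 %LOGIN /lib/squid3/groups.pl

acl login proxy_auth REQUIRED
acl limited_users external ads_group limited_users
acl allowed_sites dstdomain siteallowed.com siteallowed.fr siteallowed.ch
acl noauth_sites dstdomain office365.com

# things not requiring login are checked first
http_access allow noauth_sites
# then do the login (triggers the 407 only for requests that reach here)
http_access deny !login
# then check things that need login
http_access deny limited_users !allowed_sites
# finally allow the remaining authenticated traffic
http_access allow login
http_access deny all
```

With this ordering, office365.com never triggers a 407, and limited_users only ever get the allowed_sites list.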

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] %LOGIN place in squid 5.8 acls

2023-04-23 Thread David Touzeau
We have a "problem" with ACLs, and I don't know how to address this 
situation in Squid 5.8

Let me explain:
We have an Active Directory group named limited_users that is only 
allowed to surf on a very limited list of websites.
These users are therefore forbidden to surf on all sites not listed in 
allowed_domains
On the other hand, we have websites in noauth_sites that do not need to 
be authenticated by squid but are not allowed to be used by 
limited_users group


In logic, we would write the following ACLs.

external_acl_type ads_group ttl=3600 negative_ttl=1 concurrency=50 
children-startup=1 children-idle=1 children-max=20 ipv4 %LOGIN 
/lib/squid3/groups.pl

acl limited_users external ads_group limited_users
acl allowed_domains dstdomain siteallowed.com
acl allowed_domains dstdomain siteallowed.fr
acl allowed_domains dstdomain siteallowed.ch

acl noauth_sites dstdomain office365.com


http_access deny !allowed_domains limited_users all #ACL1
http_access allow noauth_sites #ACL2


But in this case, accessing office365.com forces Squid to send the 407 
Authentication request in order to evaluate limited_users in #ACL1; the 
second ACL then never takes effect because the request is blocked by the 
407 first.
The %LOGIN token in the external ACL ads_group activates the 
authentication mode.
If we use the %un token instead, it works, but the opposite problem 
appears: ACL#1 is not processed anymore, since authentication is not 
requested because the %un token is too permissive.


What I don't understand is that Squid evaluates the limited_users object 
when the first allowed_domains object already returns FALSE.
Whatever the result of the objects that follow allowed_domains, the rule 
will always fail.

In the case where limited_users is in first place, the logic is correct.

Two questions:

Is there a way for Squid to not evaluate all http_access objects if the 
first one fails?


What would be the best rule that could meet this goal?

regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5: server_cert_fingerprint not working fine...

2022-11-19 Thread David Touzeau

Thanks Amos for this clarification,

We have the same needs and, indeed, we take the same approach.

It is possible that Squid's internal structure cannot, in some cases, 
recover this type of information, although conceptually a proxy is 
nothing more than a big browser surfing on behalf of the client browsers.


Receiving the SHA1 fingerprint and certificate information is very 
valuable because it enables better detection of compromised sites (many 
malicious sites reuse the same information in their certificates).

This allows detecting "nests" of malicious sites automatically.

Unfortunately, there is madness in the current approach to security: a 
race to strengthen the security of tunnels (driven by Google and the 
browser vendors).

What is the advantage of encrypting Wikipedia and YouTube channels?

On the other hand, it is crucial to look inside these streams to detect 
threats.

This is contradictory...

So TLS 1.3, and soon QUIC on UDP 80/443, will make a proxy useless as 
these features are rolled out (trust Google to push them)

Unless the proxy manages to keep up in this protocol race...

For this reason, firewall manufacturers offer client software that fills 
the protocol-visibility gap in their gateway products, and you can see a 
growth of workstation protections such as the EDR concept.


Just an ideological and non-technical approach...

Regards

On 19/11/2022 at 16:50, Amos Jeffries wrote:

On 19/11/2022 2:55 am, UnveilTech - Support wrote:

Hi Amos,

We have tested with a "ssl_bump bump" ("ssl_bump all" and "ssl_bump 
bump sslstep1"), it does not solve the problem.
According to Alex, we can also confirm it's a bug with Squid 5.x and 
TLS 1.3.


Okay.

It seems Squid is only compatible with TLS 1.2, it's not good for the 
future...


One bug (or lack of ability) does not make the entire protocol 
"incompatible". It only affects people trying to do the particular 
buggy action.
Unfortunately for you (and others) it happens to be accessing this 
server cert fingerprint.


I/we have been clear from the beginning that *when used properly* 
TLS/SSL cannot be "bump"ed - that is true for all versions of TLS and 
SSL before it. In that same "bump" use-case the server does not 
provide *any* details, it just rejects the proxy attempted connection. 
In some paranoid security environments the server can reject even for 
"splice" when the clientHello is passed on unchanged by the proxy. 
HTTPS use on the web is typically *neither* of those "proper" setups 
so SSL-Bump "bump" in general works and "splice" almost always.


Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Kerberos - Cannot decrypt ticket for HTTP

2022-11-16 Thread David Touzeau

Hi

perhaps this one
https://wiki.articatech.com/en/proxy-service/troubleshooting/gss-cannot-decrypt-ticket


On 16/11/2022 at 05:11, Михаил wrote:

Hi everybody,
Could you help me to setup my new squid server? I have a problem with 
keytab authorization.
2022/11/16 11:35:39| ERROR: Negotiate Authentication validating user. 
Result: {result=BH, notes={message: gss_accept_sec_context() failed: 
Unspecified GSS failure.  Minor code may provide more information. 
Cannot decrypt ticket for HTTP/uisproxy-rop.***.***.corp@***.***.CORP 
using keytab key for HTTP/uisproxy-rop.***.***.corp@***.**.CORP; }}

Got NTLMSSP neg_flags=0xe2088297
2022/11/16 11:35:40| ERROR: Negotiate Authentication validating user. 
Result: {result=BH, notes={message: gss_accept_sec_context() failed: 
Unspecified GSS failure.  Minor code may provide more information. 
Cannot decrypt ticket for HTTP/uisproxy-rop.***.***.corp@***.***.CORP 
using keytab key for HTTP/uisproxy-rop.***.***.corp@***.***.CORP; }}
# kinit -V -k -t /etc/squid/keytab/uisproxy-rop-t.keytab 
HTTP/uisproxy-rop.***.***.corp

Using default cache: /tmp/krb5cc_0
Using principal: HTTP/uisproxy-rop.***.***.corp@***.***.CORP
Using keytab: /etc/squid/keytab/uisproxy-rop-t.keytab
Authenticated to Kerberos v5
# klist -ke /etc/squid/keytab/uisproxy-rop-t.keytab
Keytab name: FILE:/etc/squid/keytab/uisproxy-rop-t.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   3 uisproxy-rop-t$@***.***.CORP (arcfour-hmac)
   3 uisproxy-rop-t$@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 uisproxy-rop-t$@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 UISPROXY-ROP-T$@***.***.CORP (arcfour-hmac)
   3 UISPROXY-ROP-T$@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 UISPROXY-ROP-T$@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (arcfour-hmac)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 host/uisproxy-rop@***.***.CORP (arcfour-hmac)
   3 host/uisproxy-rop@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 host/uisproxy-rop@***.***.CORP (aes256-cts-hmac-sha1-96)
# klist -kt
Keytab name: FILE:/etc/squid/keytab/uisproxy-rop-t.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL based DNS server list

2022-11-02 Thread David Touzeau

It would be a good feature request for the Squid DNS client to support 
EDNS, specifically the EDNS Client Subnet (ECS) option.
ECS can be used to forward the source client IP address received by Squid 
to a remote DNS server.
The DNS server can then change its behavior depending on the source IP 
address.


Amos, Alex ?
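For reference, the option described above is EDNS Client Subnet (ECS, RFC 7871). A stdlib-only sketch of its wire encoding, for illustration only (this is not Squid code):

```python
import ipaddress
import struct


def build_ecs_option(client_ip: str, source_prefix: int) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871, option code 8)."""
    addr = ipaddress.ip_address(client_ip)
    family = 1 if addr.version == 4 else 2          # IANA address family
    # Only the significant bytes of the prefix are sent on the wire.
    addr_bytes = addr.packed[: (source_prefix + 7) // 8]
    # FAMILY (2B) | SOURCE PREFIX-LEN (1B) | SCOPE PREFIX-LEN (1B) | ADDRESS
    data = struct.pack("!HBB", family, source_prefix, 0) + addr_bytes
    # OPTION-CODE (8 = ECS) | OPTION-LENGTH | OPTION-DATA
    return struct.pack("!HH", 8, len(data)) + data
```

The resulting bytes would be carried in the RDATA of the OPT pseudo-RR of the upstream query; Squid's internal DNS client would have to be taught to emit them.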

Le 30/10/2022 à 18:00, Grant Taylor a écrit :

On 10/25/22 7:27 PM, Sneaker Space LTD wrote:

Hello,


Hi,

Is there a way to use specific DNS servers based on the user or 
connecting IP address that is making the connection by using acls or 
any other method? If so, can someone send an example.


"Any other method" covers a LOT of things.  Including things outside 
of Squid's domain.


You could probably do some things with networking such that different 
clients connected to different instances of Squid each configured to 
use different DNS servers.  --  This is a huge hole in the ground and 
can cover a LOT of things.  All of which are outside of Squid's domain.





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.7 + bump ERR_READ_ERROR|WITH_SERVER

2022-10-12 Thread David Touzeau
On 12/10/2022 at 20:00, Alex Rousskov wrote:

On 10/12/22 12:45, David Touzeau wrote:

Hi

We using squid 5.7 after adding ssl-bump we have sometimes several 
502 error  with extended error ERR_READ_ERROR|WITH_SERVER


1665589818.831 11 192.168.1.13 NONE_NONE/502 192616 OPTIONS 
https://www2.deepl.com/jsonrpc?method=LMT_split_text - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589839.288 11 192.168.1.13 NONE_NONE/502 506759 POST 
https://pollserver.lastpass.com/poll_server.php - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589719.879 44 192.168.1.13 NONE_NONE/502 506954 GET 
https://contile.services.mozilla.com/v1/tiles - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"



What does it means.


502 with ERR_READ_ERROR|WITH_SERVER may mean several things 
(unfortunately). Given HIER_NONE, I would suspect that Squid could not 
find a valid destination for the request. There is a similar recent 
squid-users thread at 
http://lists.squid-cache.org/pipermail/squid-users/2022-October/025289.html




how can we fix it ?


The first step is to identify what causes these errors.

Can you reproduce this problem at will? Perhaps by trying going to 
https://dnslabeldoesnotexist.com mentioned at the above thread? If you 
can, consider sharing (a pointer to) a compressed debugging cache.log 
from a test box that does not expose any internal secrets, as detailed 
at 
https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction



HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support


*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.7 + bump ERR_READ_ERROR|WITH_SERVER

2022-10-12 Thread David Touzeau

Hi

We using squid 5.7 after adding ssl-bump we have sometimes several 502 
error  with extended error ERR_READ_ERROR|WITH_SERVER


1665589818.831 11 192.168.1.13 NONE_NONE/502 192616 OPTIONS 
https://www2.deepl.com/jsonrpc?method=LMT_split_text - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589839.288 11 192.168.1.13 NONE_NONE/502 506759 POST 
https://pollserver.lastpass.com/poll_server.php - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589719.879 44 192.168.1.13 NONE_NONE/502 506954 GET 
https://contile.services.mozilla.com/v1/tiles - HIER_NONE/-:- text/html 
mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"


What does it means.

how can we fix it ?

regards


--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-24 Thread David Touzeau

Hi

We have some experience on cluster configuration.

https://wiki.articatech.com/en/proxy-service/hacluster

Using Kubernetes to run Squid for 40K users is a very "risky adventure".

Squid requires very high disk performance (I/O), which means both a good 
hard disk drive and a decent controller card.


You will reach a functional limit of Kubernetes, which by design is not 
suited to this type of service.


Of course you can continue in this way, but from experience we see this 
a lot:

"To take the load, you end up installing a lot of instances on multiple 
virtualization servers, whereas 2 or 3 physical machines could have 
handled it all."
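For the HAProxy-in-TCP-mode alternative raised in the quoted question, a minimal haproxy.cfg sketch (addresses, ports, and timeouts are placeholders):

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend proxy_in
    bind :3128
    default_backend squids

backend squids
    balance source              # pin each client IP to one Squid instance
    server squid1 10.0.0.11:3128 check
    server squid2 10.0.0.12:3128 check
```

`balance source` keeps a client on the same Squid instance, which matters for helper-based authentication and cache hit rates; round-robin would spread a single client's connections across instances.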


On 20/09/2022 at 21:52, Pintér Szabolcs wrote:


Hi squid community,

I need to find the best and most sustainable way to build a stable High 
Availability Squid cluster/solution for about 40k users.


Parameters: I need HA, caching (small objects only, not big things like 
Windows updates), scaling (a secondary concern), and I want to use and 
modify (in production, during working hours) complex black- and whitelists


I have some idea:

1. A huge kubernetes cluster

pro: Easy to scale, change the config and update.

contra: I'm afraid of the network latency (because of the additional 
layers, e.g. the VM network stack, the Kubernetes network stack with 
VXLAN, etc.).


2. Simple VM-s with a HAProxy in tcp mode

pro: less network latency(I think)

contra: More time to Administration


Has anybody any experience with Squid in Kubernetes (or a similar 
technology) with a large number of users?


Which do you think is the best solution, or do you have another idea for 
the implementation?


Thanks!

Best, Szabolcs

--
*Pintér Szabolcs Péter*
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600
+36 30 471-3827
spin...@npsh.hu


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid][v5.6] : problem with "slow" or "fast" acl

2022-09-06 Thread David Touzeau

Hi Eric.

We have hit the same restrictions with fast vs. slow ACLs.
Have you thought about writing a Squid helper that computes what you need?
You may be able to work around this with the "note" ACL type (acl name 
note key value), which effectively turns your helper results (slow) into 
a "fast" match.
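To illustrate the idea, a hedged squid.conf sketch (the helper path and the tag name "office" are hypothetical): the slow external ACL runs during http_access and its key=value reply is stored as a transaction annotation, which a fast "note" ACL can then match where only fast ACLs are allowed:

```
# Slow: external helper replies e.g. "OK tag=office",
# which Squid stores as a transaction annotation.
external_acl_type hours_check ttl=60 %SRC /usr/local/bin/hours_helper
acl hours_lookup external hours_check

# Force the (slow) helper lookup where slow ACLs are still legal.
http_access deny !hours_lookup

# Fast: match the annotation left by the helper reply.
acl in_office note tag office

# "note" is a fast ACL, so it is usable in delay_access.
delay_access 1 allow in_office
delay_access 1 deny all
```

The helper result is computed once per http_access pass and merely re-read later, so the fast-only directives never have to wait on the helper themselves.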




On 05/09/2022 at 14:56, PERROT Eric DNUM SDCAST BST SSAIM wrote:

Hello,

We use the directives "reply_body_max_size", "request_body_max_size" and 
"delay_access" to limit upload, download and bandwidth in our 
infrastructure.

This configuration has existed for a while, but we noticed that with 
squid v4.16 our delay pool no longer reacted as we wanted. We were 
expecting an improvement by upgrading squid to v5.6, but it got worse:

- the restriction still didn't work
- and squid had a segmentation fault each time certain acls were used

Thanks to Alex Rousskov (bug 5231), after some investigation it appears 
that we used "slow" acls (proxy_auth and time acls) where only "fast" 
acls are allowed. The bug is still open, as squid does not flag the 
problem in cache.log,


My email is to show you our configuration, the behaviour we expect, and 
the behaviour we actually get.
1 - squid v4.12: we expect to limit download/upload and bandwidth during 
working time for all logins except those starting with cg_*

"
|## Gestion de bande passante ##
acl bureau time 09:00-12:00
acl bureau time 14:00-17:00
# Comptes generiques
|||acl my_ldap_auth proxy_auth REQUIRED
|acl cgen proxy_auth_regex cg_
reply_body_max_size 800 MB *bureau !cgen*
request_body_max_size 5 MB
# La limite de bande passante ne fonctionne plus avec le BUMP
# A tester ...
delay_pools 1
# Pendant time sauf cgen, emeraude
delay_class 1 4
delay_access 1 allow**||*||my_ldap_auth !cgen||***!emeraude
delay_access 1 deny all
# 512000 = 5120 kbits/user 640 ko
# 307200 = 3072 kbits/user 384 ko
delay_parameters 1 -1/-1 -1/-1 -1/-1 107200/107200
##|
"
=> with this configuration, the delay pool seemed not to work anymore, 
so we upgraded squid to v5.6, which caused the squid segmentation 
fault...


2 - squid v5.6: to fix the segmentation fault, we had to remove 
my_ldap_auth/cgen (proxy_auth acls) and bureau (time acl). The 
limitation works again, but we are no longer able to restrict during 
working time, or for specific logins...

"
|## Gestion de bande passante ##
acl bureau time 09:00-12:00
acl bureau time 14:00-17:00
# Comptes generiques
acl userrgt src 10.0.0.0/8
|||acl my_ldap_auth proxy_auth REQUIRED
|acl cgen proxy_auth_regex cg_
reply_body_max_size 800 MB *userrgt*
request_body_max_size 5 MB
# La limite de bande passante ne fonctionne plus avec le BUMP
# A tester ...
delay_pools 1
# Pendant time sauf cgen, emeraude
delay_class 1 4
delay_access 1 allow||****!emeraude
delay_access 1 deny all
# 512000 = 5120 kbits/user 640 ko
# 307200 = 3072 kbits/user 384 ko
delay_parameters 1 -1/-1 -1/-1 -1/-1 107200/107200
##|
"

Can you tell me if what we want to do is still possible? Limiting 
upload/download/bandwidth for all logged-in users except those starting 
with cg_*?


Thank you for the time reading, and thank you for your answers.

Regards,

Eric Perrot




For an exemplary administration, let's preserve the environment.
Let's print only when necessary.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] the free domains blacklists are gone..

2022-07-02 Thread David Touzeau


Hi Eliezer,

Here a set of lists.

https://github.com/KeyofBlueS/hBlock-Launcher/blob/master/list.txt
https://lists.noads.online/lists/compilation.txt
https://github.com/GlacierSheep/DomainBlockList/tree/5bfcb0c2eabed2f9c82f0bac260e1d88550b5789
https://github.com/maravento/blackweb/blob/master/blackweb.tar.gz
https://github.com/ShadowWhisperer/BlockLists/tree/master/Lists
https://github.com/jerryn70/GoodbyeAds
https://blocklist.site/
https://www.blocked.org.uk
https://blocklist-tools.developerdan.com/blocklists
https://blocklistproject.github.io/Lists/#lists
https://blokada.org/blocklists/ddgtrackerradar/standard/hosts.txt
https://github.com/LINBIT/csync2
https://github.com/StevenBlack/hosts/blob/master/data/KADhosts/hosts
https://github.com/stamparm/maltrail
https://raw.githubusercontent.com/notracking/hosts-blocklists/master/dnscrypt-proxy/dnscrypt-proxy.blacklist.txt
https://blocklist-tools.developerdan.com/entries/search?q=nettflix.website
https://github.com/Import-External-Sources/hosts-sources/tree/master/data
https://hosts.gameindustry.eu/abusive-adblocking/
https://www.bentasker.co.uk/adblock/autolist.txt
https://github.com/VenexGit/DeepGuard
https://firebog.net/
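Many of these lists ship in Pi-hole/hosts format; converting one to the dstdomain format mentioned in the quoted message can be sketched like this (a rough stdlib-only sketch):

```python
import ipaddress


def hosts_to_dstdomain(lines):
    """Convert hosts-style blocklist lines to Squid dstdomain entries."""
    out = []
    for line in lines:
        line = line.split("#", 1)[0].strip()      # drop comments
        if not line:
            continue
        parts = line.split()
        # hosts format: "0.0.0.0 domain"; plain lists: just "domain"
        domain = parts[1] if len(parts) > 1 else parts[0]
        try:
            ipaddress.ip_address(domain)
            continue                               # skip bare IP entries
        except ValueError:
            pass
        out.append("." + domain.lstrip("."))       # leading dot matches subdomains
    return out
```

The result can be written one entry per line and loaded with e.g. `acl ads dstdomain "/etc/squid/ads.txt"` (the file path is a placeholder).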


On 30/06/2022 at 19:00, ngtech1...@gmail.com wrote:


Hey,

I have tried to download blacklists from couple sites that was 
publishing these in the past and all of them are gone.


The only free resource I have found was DNS blacklists.

I just wrote a dstdomain external helper that can work with a SQL DB 
and it seems to run pretty nice.


Until now I have tried MySQL, MariaDB, MSSQL, PostgreSQL, and all of 
them work pretty nicely.


There is an overhead in storing the data in a DB compared to a plain 
text file but the benefits are worth it.


The only lists I have found are for Pihole for example at:

https://github.com/blocklistproject/Lists

So now I just need to convert these to dstdomain format and it will 
work with Squid pretty nice.


Any recommendations for free lists are welcome.

Thanks,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com>

Web: https://ngtech.co.il/ <https://ngtech.co.il/>

My-Tube: https://tube.ngtech.co.il/ <https://tube.ngtech.co.il/>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support
    
    
*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-26 Thread David Touzeau

Hi Eliezer

It is for when you want transparent mode without having to put the Squid 
box in front of your Fortinet.

If you want transparent mode while your Fortinet aggregates several 
VLANs, WCCP mode is necessary.


This way you can control everything through your FortiGate.

By the way, Fortinet offers its own proxy based on WCCP to ensure 
consistent integration with the FortiGate.


My configuration is very simple to replicate:

We added a service ID 80 on the FortiGate, but it failed because of the 
Squid bug:


config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash 
priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1


On 24/06/2022 at 13:17, ngtech1...@gmail.com wrote:


I am not sure and can spin up my Forti but from what I remember there 
are PBR functions in the Forti.


Why would a WCCP be required? To pass only ports 80 and 443 instead of 
all traffic?



--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-24 Thread David Touzeau

Hi Eliezer

No, Fortinet is good.

In this case, connecting HTTP/HTTPS via WCCP from the Fortinet to Squid 
did not work, because Squid refuses to communicate with the Fortinet due 
to the "Ignoring WCCPv2 message: truncated record" issue.


With Squid, the Fortinet reports that no WCCP server is available.


On 23/06/2022 at 18:33, ngtech1...@gmail.com wrote:


Hey David,

Just trying to understand something:

Isn't a Fortinet something that should replace Squid?

I assumed that it should do a much better job than Squid in many areas.

What is a Fortinet (I have one…) not covering?

Thanks,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Web: https://ngtech.co.il/

My-Tube: https://tube.ngtech.co.il/

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Thursday, 23 June 2022 19:12
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring 
WCCPv2 message: truncated record


Hi Alex,

Is the v5 commit 7a73a54 already included in the latest 5.5/5.6 versions?

This is very unfortunate because WCCP is used by default by Fortinet 
firewall devices. It should be very popular.

Indeed, Fortinet is flooding the market.
I can volunteer to fund and carry out the necessary testing.

On 23/06/2022 at 14:44, Alex Rousskov wrote:

On 6/21/22 07:43, David Touzeau wrote:


We are trying to use WCCP with a FortiGate, without success; Squid
version 5.5 always claims "Ignoring WCCPv2 message: truncated
record".

What could be the cause?


The most likely causes are bugs in untested WCCP fixes (v5 commit
7a73a54). Dormant draft PR 970 contains unfinished fixes for the
problems in that previous attempt:
https://github.com/squid-cache/squid/pull/970

IMHO, folks that need WCCP support should invest into that
semi-abandoned Squid feature or risk losing it. WCCP code needs
serious refactoring and proper testing. There are currently no
Project volunteers that have enough resources and capabilities to
do either.


https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F



HTH,

Alex.



We have added a service ID 80 on fortigate

config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80,443
wccp2_address 0.0.0.0
wccp2_weight 1

Squid claim in debug log

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206)
wccp2HandleUdp: wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect:
FD 38, type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230)
wccp2HandleUdp: Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message:
truncated record
 exception location: wccp2.cc(1133) CheckSectionLength



-- 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--

*Technical Support*

    


*David Touzeau***

Orgerus, Yvelines, France

*Artica Tech*


P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support


*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-23 Thread David Touzeau

Hi Alex,

Is the v5 commit 7a73a54 already included in the latest 5.5/5.6 versions?

This is very unfortunate because WCCP is used by default by Fortinet 
firewall devices. It should be very popular.

Indeed, Fortinet is flooding the market.
I can volunteer to fund and carry out the necessary testing.

On 23/06/2022 at 14:44, Alex Rousskov wrote:

On 6/21/22 07:43, David Touzeau wrote:

We are trying to use WCCP with a FortiGate, without success; Squid version 
5.5 always claims "Ignoring WCCPv2 message: truncated record".


What could be the cause?


The most likely causes are bugs in untested WCCP fixes (v5 commit 
7a73a54). Dormant draft PR 970 contains unfinished fixes for the 
problems in that previous attempt:

https://github.com/squid-cache/squid/pull/970

IMHO, folks that need WCCP support should invest into that 
semi-abandoned Squid feature or risk losing it. WCCP code needs 
serious refactoring and proper testing. There are currently no Project 
volunteers that have enough resources and capabilities to do either.


https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F 




HTH,

Alex.



We have added a service ID 80 on fortigate

config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1

Squid claim in debug log

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp: 
wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, 
type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: 
Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: 
truncated record

 exception location: wccp2.cc(1133) CheckSectionLength



--

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support
    
    
*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-21 Thread David Touzeau

Hi

We are trying to use WCCP with a FortiGate, without success; Squid version 
5.5 always claims "Ignoring WCCPv2 message: truncated record".


What could be the cause?

We have added a service ID 80 on fortigate

config system wccp
    edit "80"
    set router-id 10.10.50.1
    set group-address 0.0.0.0
    set server-list 10.10.50.2 255.255.255.255
    set server-type forward
    set authentication disable
    set forward-method GRE
    set return-method GRE
    set assignment-method HASH
    next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1

Squid claim in debug log

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp: 
wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, 
type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: 
Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: truncated 
record

    exception location: wccp2.cc(1133) CheckSectionLength



--___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid load simulation tools for performance testing

2022-05-25 Thread David Touzeau

Use "siege" it can simulate x users for x urls

You can also use our free of charge appliance that allows you to easily 
use siege.


https://wiki.articatech.com/en/proxy-service/tuning/stress-your-proxy-server
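For reference, a minimal siege setup against a proxy might look like the following sketch. The proxy address, file paths, and the exact siegerc directive names are assumptions and should be checked against your siege version's documentation:

```shell
# ~/.siege/siegerc — point siege at the proxy under test (assumed 10.10.50.2:3128):
#   proxy-host = 10.10.50.2
#   proxy-port = 3128

# Then simulate 500 concurrent users for 5 minutes against a list of URLs:
siege -c 500 -t 5M -f urls.txt
```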



On 10/05/2022 at 07:33, Punyasloka Arya wrote:

Dear ALL,

We have just installed Squid 5.5 (stable version) from source on Ubuntu
20.04.
Before putting it in the production network, we want to test the performance of
Squid by monitoring critical parameters like response time, cache hits, cache
misses, etc.
We would like to know of tools/software/scripts to simulate load conditions for
500 users with at least 1K connections.

Any help is greatly appreciated.

From
Punyasloka Arya
PUNYASLOKA ARYAपुण्यश्लोक आर्या
Staffno:3880,Netops,TS(B)
Senior Research Engineer   वरिष्ठ अनुसंधान अभियंता
C-DOT  सी-डॉट
Electronics City,Phase-1   इलैक्ट्रॉनिक्स सिटी फेज़ I
Hosur Road,Bangalore   होसूर रोड, बेंगलूरु
560100 560100


--
Open WebMail Project (http://openwebmail.org)

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support

    
*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.4 : ERR_PROTOCOL_UNKNOWN and exception=18686e4e

2022-03-05 Thread David Touzeau

Hi

I added exterr="%err_code|%err_detail" to the log format, and the result shows 
some requests with ERR_PROTOCOL_UNKNOWN|exception=18686e4e:


1646498399.887 46 176.12.1.2 NONE_NONE/000 0 CONNECT 62.67.238.138:443 - 
HIER_NONE/-:- exterr="ERR_PROTOCOL_UNKNOWN|exception=18686e4e"


What does "exception=18686e4e" means, how to avoid/force squid to 
forward data ?


Should on_unsupported_protocol fix this behavior?
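on_unsupported_protocol can indeed tell Squid to blindly tunnel client traffic it fails to parse instead of rejecting it; a minimal squid.conf sketch follows (whether it covers this particular exception would need to be verified):

```
# Tunnel client bytes that do not parse as an expected protocol,
# instead of rejecting the connection
on_unsupported_protocol tunnel all
```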

regards___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid plugin sponsor

2022-02-14 Thread David Touzeau

Eliezer,

First of all, thank you for twisting your brain at our request.
I know your skills and your time is very valuable.

HotSpot + cookies can be interesting, but it has a constraint that 
Kerberos/NTLM SSO fixes:


1) Redirecting connections to a HotSpot requires Squid to be able to 
forward the redirection.
When using SSL sites without man-in-the-middle interception, we run into 
structural issues.


2) Even if this problem can be circumvented, the user still has to 
identify himself on the splash screen so we know who he is,

while this user is already identified by his Windows session.


Forget about NTLMv2, which no longer allows the "fake" approach.
The advantage of fake_ntlm is that when Squid performs its 407, 
the browser naturally sends its Windows session username, whether it is 
connected to an Active Directory or not.


This is what we want to catch in the end.

The HotSpot way is a half-solution. It circumvents the limit of 
identification but adds new network constraints you mention.


The dream is a plugin that forces Squid to generate a 407, asks 
browsers "Give me your user account, whatever it is", and allows access in 
any case, setting the user=xxx switch for the next processing step.


It almost looks like the "ident" method
http://www.squid-cache.org/Misc/ident.html
Without having to install a piece of software and a listening port on 
all the computers in the network


On 14/02/2022 at 19:50, Eliezer Croitoru wrote:


Hey David,

Transparent authentication using Kerberos can only be used with a 
directory service.


There are couple ways to authenticate…

You can use an “automatic” hotspot website that will use cookies to 
authenticate the client once in a very long time.


If the client request is not recognized or the client is not 
recognized for any reason it’s reasonable to redirect him into a 
captive portal.


I can try to work on a demo but I need to know more details about the 
network structure and to verify what is possible and not.


Every device, i.e. switch, router, AP, etc., should be mentioned in order to 
understand the scenario.


While you assume it’s a chimera, I still believe it’s just a three-headed 
Kerberos, which… was proven to exist… in the movies and in the 
virtual world.


Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*David Touzeau 
*Sent:* Monday, February 14, 2022 03:21
*To:* Eliezer Croitoru 
*Cc:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] Squid plugin sponsor


Thank you for your answer Elizer for all these details, but I've done 
some research to avoid soliciting the community for simple questions.


The objective is not to ask the user for anything and not to break his 
browsing with a session prompt.
To summarize: an SSO identification like Kerberos, with the following 
constraints:


 1. unknown Mac addresses
 2. DHCP IP with a short lease
 3. No Active Directory connection.




The network is in VLAN (Mac addr masked) and in DHCP with a short lease.
Even the notion of hotspot is complicated when you can't focus on a 
network attribute.

I try to find a way directly in the HTTP protocol.
This is the reason why a fake could be a solution.

But I think I'm trying to catch a chimera and we'll have to redesign 
the network architecture.


regards

On 12/02/2022 at 06:27, Eliezer Croitoru wrote:

Hey David,

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal
way possible.
(taking into account money, coding complexity and other humane parts)

You will need to authenticate the client against the main AUTH
service.

There is a definitive way or statistical way to implement this
solution.

With AD or Kerberos it’s possible to implement the solution in
such a way that windows will
“transparently” authenticate to the proxy service.

However you must understand that all of this requires an
infrastructure that will provide every piece of the setup.

If your setup doesn’t contain RDP-like servers, then you may be able
to authenticate a user by IP, as opposed
to pinning every connection to a specific user.

Also, the “cost” of non-transparent authentication is that the
user will be required to enter (manually or automatically)
the username and the password.

A HotSpot-like setup is called a “captive portal”, and it’s a very
simple setup to implement with Active Directory.

It’s also possible to implement a transparent authentication for
such a setup based on session tokens.

You actually don’t need to create a “fake” helper for such a setup
but you can create one that is based on Linux.

It’s an “Advanced” topic but if you do ask me it’s possible that
you can take this in steps.

The first step wo

Re: [squid-users] Squid plugin sponsor

2022-02-13 Thread David Touzeau


Thank you for your answer Elizer for all these details, but I've done 
some research to avoid soliciting the community for simple questions.


The objective is not to ask the user for anything and not to break his 
browsing with a session prompt.
To summarize: an SSO identification like Kerberos, with the following 
constraints:


1. unknown Mac addresses
2. DHCP IP with a short lease
3. No Active Directory connection.




The network is in VLAN (Mac addr masked) and in DHCP with a short lease.
Even the notion of hotspot is complicated when you can't focus on a 
network attribute.

I try to find a way directly in the HTTP protocol.
This is the reason why a fake could be a solution.

But I think I'm trying to catch a chimera and we'll have to redesign the 
network architecture.


regards

On 12/02/2022 at 06:27, Eliezer Croitoru wrote:


Hey David,

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal way 
possible.

(taking into account money, coding complexity and other humane parts)

You will need to authenticate the client against the main AUTH service.

There is a definitive way or statistical way to implement this solution.

With AD or Kerberos it’s possible to implement the solution in such a 
way that windows will

“transparently” authenticate to the proxy service.

However you must understand that all of this requires an 
infrastructure that will provide every piece of the setup.


If your setup doesn’t contain RDP-like servers, then you may be able to 
authenticate a user by IP, as opposed

to pinning every connection to a specific user.

Also, the “cost” of non-transparent authentication is that the user 
will be required to enter (manually or automatically)

the username and the password.

A HotSpot-like setup is called a “captive portal”, and it’s a very 
simple setup to implement with Active Directory.


It’s also possible to implement a transparent authentication for such 
a setup based on session tokens.


You actually don’t need to create a “fake” helper for such a setup but 
you can create one that is based on Linux.


It’s an “advanced” topic, but if you ask me, you can 
take this in steps.


The first step would be to use a session helper that will authenticate 
the user and will identify the user

based on it’s IP address.

If it’s a wireless setup you can use a radius based authentication ( 
can also be implemented on a wired setup).


Once you will authenticate the client transparently or in another way 
you can limit the usage of the username to
a specific client and with that comes a guaranteed situation that a 
username will not be used from two sources.


I don’t know about your experience, but the usage of a captive portal 
is very common in such situations.


The other option is to create an agent in the client side that will 
identify the user against the proxy/auth service
and it will create a situation which an authorization will be acquired 
based on some degree of authentication.


In most SSO environments it’s possible that per request/domain/other 
there is a transparent validation.


In all the above scenarios which requires authentication the right way 
to do it would be to use the proxy as

a configured proxy compared to transparent.

I believe that one thing to consider is that once you authenticate 
against a RADIUS service you would just

minimize the user interaction.

The main point from what I understand is to actually minimize the 
authentication steps of the client.


My suggestion for you is to first try to assess the complexity of a 
session helper, RADIUS, and a captive portal.


These are steps that you will need to take in order to assess the 
necessity of transparent SSO.


Also take your time to compare how a captive portal is configured in 
the next general products:


  * Palo Alto
  * FortiGate
  * Untangle
  * Others

From the documentation you would see the different ways and “grades” 
that they implement the solutions.


Once you know what the market offers and the equivalent costs, you 
will probably understand what
you want and what you can afford to invest in the development process 
of each part of the setup.


All The Bests,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Friday, February 11, 2022 17:03
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] Squid plugin sponsor

Hello

Thank you, but this is not the objective, and this is the reason for 
needing the "fake".
Access to the Kerberos or NTLM ports of the AD is not possible. An LDAP 
server would be present with account replication.

The idea is to do a silent authentication without joining the AD.
We do not need the full user/password credential; only the username 
sent by the browser is required.


If the user has

Re: [squid-users] Squid plugin sponsor

2022-02-11 Thread David Touzeau

Hello

Thank you, but this is not the objective, and this is the reason for 
needing the "fake".
Access to the Kerberos or NTLM ports of the AD is not possible. An LDAP 
server would be present with account replication.

The idea is to do a silent authentication without joining the AD.
We do not need the full user/password credential; only the username sent 
by the browser is required.


If the user has an Active Directory session, then his account is 
automatically sent without him having to take any action.
If the user is in a workgroup, then the account sent will not be in the 
LDAP database and will be rejected.
I don't need to argue about the security value of this method. It saves 
us from building an over-engineered contraption to make a kind of HotSpot.


On 11/02/2022 at 05:55, Dieter Bloms wrote:

Hello David,

for me it looks like you want to use Kerberos authentication.
With Kerberos authentication the user doesn't have to authenticate against
the proxy. The authentication is done in the background.

Maybe this link will help:

https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

On Thu, Feb 10, David Touzeau wrote:


Hi

What we are looking for is to retrieve a "user" token without having to ask
anything from the user.
That's why we're looking at Active Directory credentials.
Once the user account is retrieved, a helper would be in charge of checking
if the user exists in the LDAP database.
This is to avoid any connection to an Active Directory
Maybe this is impossible


Le 10/02/2022 à 05:03, Amos Jeffries a écrit :

On 10/02/22 01:43, David Touzeau wrote:

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support
new protocols

ntlm_* helpers are specific to NTLM authentication. All LanManager (LM)
protocols should already be supported as well as currently possible.
NTLM is formally discontinued by MS and *very* inefficient.

NP: NTLMv2 with encryption does not *work* because that encryption step
requires secret keys the proxy is not able to know.


or go further produce a new negotiate_kerberos_auth_fake


With current Squid this helper only needs to produce an "OK" response
regardless of the input. The basic_auth_fake does that.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid plugin sponsor

2022-02-10 Thread David Touzeau

Hi

What we are looking for is to retrieve a "user" token without having to 
ask anything from the user.

That's why we're looking at Active Directory credentials.
Once the user account is retrieved, a helper would be in charge of 
checking if the user exists in the LDAP database.

This is to avoid any connection to an Active Directory
Maybe this is impossible


On 10/02/2022 at 05:03, Amos Jeffries wrote:

On 10/02/22 01:43, David Touzeau wrote:

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support 
new protocols


ntlm_* helpers are specific to NTLM authentication. All LanManager 
(LM) protocols should already be supported as well as currently 
possible. NTLM is formally discontinued by MS and *very* inefficient.


NP: NTLMv2 with encryption does not *work* because that encryption 
step requires secret keys the proxy is not able to know.



or go further produce a new negotiate_kerberos_auth_fake



With current Squid this helper only needs to produce an "OK" response 
regardless of the input. The basic_auth_fake does that.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
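To illustrate what Amos describes, here is a minimal sketch of a "fake" helper that answers OK to every lookup. This is hypothetical illustration code, not the shipped basic_auth_fake source; the channel-id handling assumes Squid's helper concurrency protocol, where a numeric channel-id may prefix each request line:

```python
import sys

def respond(line: str) -> str:
    """Answer OK to any helper lookup, echoing the channel-id if present."""
    parts = line.strip().split(None, 1)
    if parts and parts[0].isdigit():   # concurrent protocol: "<channel-id> <payload>"
        return parts[0] + " OK\n"
    return "OK\n"

if __name__ == "__main__":
    # Classic helper loop: one request line in, one result line out, flushed.
    for line in sys.stdin:
        sys.stdout.write(respond(line))
        sys.stdout.flush()
```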


[squid-users] Squid plugin sponsor

2022-02-09 Thread David Touzeau

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support new 
protocols or, going further, the creation of a new negotiate_kerberos_auth_fake.


Who should start the challenge?

regards___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-07 Thread David Touzeau

You are the best!
We will launch a benchmark to see the difference.

On 07/02/2022 at 16:14, Eliezer Croitoru wrote:


Hey David,

Since handle_stdout runs in its own thread, its sole purpose is 
to send results to stdout.


If I run the following code in a simple program without the 0.5-second 
sleep:


while RUNNING:
    if quit > 0:
        return
    while len(queue) > 0:
        item = queue.pop(0)
        sys.stdout.write(item)
        sys.stdout.flush()
    time.sleep(0.5)

what will happen is that the program will run at 100% CPU, looping 
over and over on the size of the queue,

while occasionally spitting some data to stdout.

Adding a small delay of 0.5 seconds allows some “idle” time for the 
CPU in the loop, preventing it from consuming

all the CPU time.

It’s a very old technique, and there are more efficient ones, but it’s 
enough to demonstrate that a simple
threaded helper is much better than any PHP code that was not meant to 
run as a STDIN/STDOUT daemon/helper.
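As a side note on the more efficient techniques alluded to above: Python's standard `queue.Queue` blocks on `get()` instead of polling, so a writer thread consumes no CPU while the queue is empty. A small self-contained sketch (the names here are illustrative, not taken from the helper code in this thread):

```python
import queue
import threading

q = queue.Queue()
results = []

def writer():
    # Blocks on get() without spinning the CPU; a None sentinel signals shutdown.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

t = threading.Thread(target=writer)
t.start()
for i in range(3):
    q.put("line %d OK\n" % i)   # producer side: hand results to the writer
q.put(None)                     # ask the writer thread to exit
t.join()
print("".join(results))
```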


All The Bests,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*David Touzeau 
*Sent:* Monday, February 7, 2022 02:42
*To:* Eliezer Croitoru ; 
squid-users@lists.squid-cache.org

*Subject:* Re: [squid-users] external helper development

Sorry, Eliezer.

It was a mistake... No, your code is clean.
Impressive for a first shot.
Many thanks for your example; we will run our stress tool to see the 
difference...


Just a question

Why did you add 500 milliseconds of sleep in handle_stdout? Is 
it to let Squid close the pipe?



On 06/02/2022 at 11:46, Eliezer Croitoru wrote:

Hey David,

Not a fully completed helper, but it seems to work pretty nicely and
might be better than what already exists:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python
import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users 
<mailto:squid-users-boun...@lists.squid-cache.org> *On Behalf Of
*David Touzeau
*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Elizer,

Thanks for all this advice and indeed your arguments are valid
between opening a socket, sending data, receiving data and closing
the socket unlike direct access to a regex or a memory entry even
if the calculation has alre

Re: [squid-users] external helper development

2022-02-06 Thread David Touzeau

Sorry, Eliezer.

It was a mistake... No, your code is clean.
Impressive for a first shot.
Many thanks for your example; we will run our stress tool to see the 
difference...


Just a question

Why did you add 500 milliseconds of sleep in handle_stdout? Is it 
to let Squid close the pipe?




On 06/02/2022 at 11:46, Eliezer Croitoru wrote:


Hey David,

Not a fully completed helper, but it seems to work pretty nicely and 
might be better than what already exists:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python
import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Elizer,

Thanks for all this advice and indeed your arguments are valid between 
opening a socket, sending data, receiving data and closing the socket 
unlike direct access to a regex or a memory entry even if the 
calculation has already been done.


But what surprises me the most is that we have produced a python 
plugin in thread which I provide you a code below.
The php code is like your mentioned example ( No thread, just a loop 
and output OK )


Results are after 6k requests, squid freeze and no surf can be made as 
with PHP code we can up to 10K requests and squid is happy

really, we did not understand why python is so low.

Here a python code using threads

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
    self._exiting = False
    self._cache = {}

    def exit(self):
    self._exiting = True

    def stdout(self, lineToSend):
    try:
    sys.stdout.write(lineToSend)
    sys.stdout.flush()

    except IOError as e:
    if e.errno==32:
    # Error Broken PIPE!"
    pass
    except:
    # other execpt
    pass

    def run(self):
    while not self._exiting:
    if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
    line = sys.stdin.readline()
    LenOfline=len(line)

    if LenOfline==0:
    self._exiting=True
    break

    if line[-1] == '\n':line = line[:-1]
    channel = None
    options = line.split()

    try:
    if options[0].isdigit(): channel = options.pop(0)
    except IndexError:
    self.stdout("0 OK first=ERROR\n")
    continue

    # Processing here

    try:
    self.stdout("%s

Re: [squid-users] external helper development

2022-02-06 Thread David Touzeau

Thanks Eliezer !!

I have tested your code as-is as the /lib/squid3/external_acl_first process,
but it takes 100% CPU and Squid freezes requests.

Seems there is a runaway loop somewhere...

root 105852  0.0  0.1  73712  9256 ?    SNs  00:27   0:00 squid
squid    105854  0.0  0.3  89540 27536 ?    SN   00:27   0:00 
(squid-1) --kid squid-1
squid    105855 91.6  0.5 219764 47636 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first
squid    105856 91.8  0.5 219768 47672 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first
squid    105857 92.9  0.5 293488 47696 ?    SNl  00:27   2:54 python 
/lib/squid3/external_acl_first
squid    105858 91.8  0.6 367228 49728 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first


I could not find where the problem is...


On 06/02/2022 at 11:46, Eliezer Croitoru wrote:


Hey David,

Not a fully completed helper but it seems to works pretty nice and 
might be better then what exist already:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END
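A side note on the gist above: the plain list used as a queue is what forces the 0.5-second polling loop in handle_stdout. Python's queue.Queue gives blocking semantics instead; a minimal sketch of that substitution (my own variation, not part of the original gist):

```python
import queue
import sys
import threading

out_q = queue.Queue()  # thread-safe replacement for the plain list

def stdout_writer():
    while True:
        item = out_q.get()   # blocks until an item arrives; no polling
        if item is None:     # sentinel: tell the writer to stop
            break
        sys.stdout.write(item)
        sys.stdout.flush()

writer = threading.Thread(target=stdout_writer)
writer.start()

out_q.put("0 OK\n")   # what handle_line would enqueue
out_q.put(None)       # shut the writer down
writer.join()
```

With a blocking get() the writer thread sleeps in the kernel instead of waking twice a second, which matters when dozens of helper children are running.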

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Eliezer,

Thanks for all this advice. Your arguments are indeed valid: opening a 
socket, sending data, receiving data and closing the socket costs more 
than direct access to a regex or an in-memory entry, even when the 
computation has already been done.


What surprises us most is that we have produced a threaded Python 
plugin, for which I provide the code below.
The PHP code is like your mentioned example (no thread, just a loop 
and an OK output).


The result: after 6k requests Squid freezes and no surfing is possible, 
whereas with the PHP code we reach up to 10k requests and Squid is happy.

We really do not understand why Python is so slow.

Here is the Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                LenOfline = len(line)

                if LenOfline == 0:

Re: [squid-users] external helper development

2022-02-04 Thread David Touzeau

Eliezer,

Thanks for all this advice. Your arguments are indeed valid: opening a 
socket, sending data, receiving data and closing the socket costs more 
than direct access to a regex or an in-memory entry, even when the 
computation has already been done.


What surprises us most is that we have produced a threaded Python plugin, 
for which I provide the code below.
The PHP code is like your mentioned example (no thread, just a loop and 
an OK output).


The result: after 6k requests Squid freezes and no surfing is possible, 
whereas with the PHP code we reach up to 10k requests and Squid is happy.

We really do not understand why Python is so slow.

Here is the Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                LenOfline = len(line)

                if LenOfline == 0:
                    self._exiting = True
                    break

                if line[-1] == '\n':
                    line = line[:-1]
                channel = None
                options = line.split()

                try:
                    if options[0].isdigit():
                        channel = options.pop(0)
                except IndexError:
                    self.stdout("0 OK first=ERROR\n")
                    continue

                # Processing here

                try:
                    self.stdout("%s OK\n" % channel)
                except:
                    self.stdout("%s ERROR first=ERROR\n" % channel)


class Main(object):
    def __init__(self):
        self._threads = []
        self._exiting = False
        self._reload = False
        self._config = ""

        for sig, action in (
            (signal.SIGINT, self.shutdown),
            (signal.SIGQUIT, self.shutdown),
            (signal.SIGTERM, self.shutdown),
            (signal.SIGHUP, lambda s, f: setattr(self, '_reload', True)),
            (signal.SIGPIPE, signal.SIG_IGN),
        ):
            try:
                signal.signal(sig, action)
            except AttributeError:
                pass

    def shutdown(self, sig=None, frame=None):
        self._exiting = True
        self.stop_threads()

    def start_threads(self):
        sThread = ClienThread()
        t = threading.Thread(target=sThread.run)
        t.start()
        self._threads.append((sThread, t))

    def stop_threads(self):
        for p, t in self._threads:
            p.exit()
        for p, t in self._threads:
            t.join(timeout=1.0)
        self._threads = []

    def run(self):
        """ main loop """
        ret = 0
        self.start_threads()
        return ret


if __name__ == '__main__':
    # set C locale
    locale.setlocale(locale.LC_ALL, 'C')
    os.environ['LANG'] = 'C'
    ret = 0
    try:
        main = Main()
        ret = main.run()
    except SystemExit:
        pass
    except KeyboardInterrupt:
        ret = 4
    except:
        sys.exit(ret)

On 04/02/2022 at 07:06, Eliezer Croitoru wrote:


As for each helper's cache: the memory cost of a per-helper cache is 
small compared to the cost of network access.


Again it’s possible to test and verify this on a loaded system to get 
results. The delay itself can be seen from squid side in the cache 
manager statistics.


You can also try to compare the next ruby helper:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

About a shared "base" which allows helpers to avoid recomputing the 
query: it's a good argument; however, it depends on the cost of

pulling from the cache compared to calculating the answer.

A very simple string comparison or regex matching would probably be 
faster than reaching a shared storage in many cases.


Also take into account the "concurrency" support from the helper side.

A helper that supports parallel processing of requests/lines can do 
better than many single-request helpers in more than one use case.


In any case I would suggest enabling request concurrency on the Squid 
side, since the STDIN buffer will emulate some level of concurrency

by itself and will allow Squid to keep moving forward faster.
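To make the concurrency point concrete: with concurrency=N on the external_acl_type line, Squid prefixes every request line with a channel ID and accepts answers out of order. A minimal sketch of a helper speaking that protocol (the always-OK policy is a placeholder, and the in-memory demo stands in for Squid's stdin/stdout):

```python
import io
import sys

def answer(line):
    """Parse one 'channel-id key-values...' request line and build a reply."""
    parts = line.split()
    if not parts or not parts[0].isdigit():
        return "BH message=malformed-line"   # protocol error back to Squid
    channel = parts[0]
    # Real policy logic would inspect parts[1:] (e.g. the %URI); always allow here.
    return "%s OK" % channel

def serve(stdin, stdout):
    for raw in stdin:
        raw = raw.strip()
        if raw:
            stdout.write(answer(raw) + "\n")
            stdout.flush()

# Demo with in-memory streams; in production call serve(sys.stdin, sys.stdout).
out = io.StringIO()
serve(io.StringIO("0 http://example.com/\n1 http://ads.example/\n"), out)
```

Because every answer carries its channel ID back, slow lookups on one channel do not block answers on another.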

Just to mention that SquidGuard has used a single-helper cache for a 
very long time, i.e. every single SquidGuard helper has its own copy 
of the whole configuration and database files in memory.

And again, if you do have any 

Re: [squid-users] external helper development

2022-02-03 Thread David Touzeau

Hi Eliezer

You are right in a way, but when Squid loads multiple helpers, each 
helper will use its own cache.
Using a shared "base" allows helpers to avoid having to compute a query 
already resolved by another helper that already has the answer.


Concerning PHP, what we find strange is that in our tests, with just a 
simple loop and an "echo OK", PHP runs about 1.5x faster than Python.


On 03/02/2022 at 07:09, Eliezer Croitoru wrote:

Hey Andre,

Every language has a "cost" for it's qualities.
For example, Golang is a very nice language that offers a relatively simple way 
for concurrency support and cross hardware compilation/compatibility.
One cost in Golang is that the binary is in the size of an OS/Kernel.
In Python you must write everything with specific positioning and indentation, and 
threading is not simple for a novice to implement.
However, when you see what has been written in Python, you can see that most of 
OpenStack's APIs and systems are written in Python, and that means something.
I like Ruby very much, but it doesn't support threading by nature; it supports 
"concurrency".
Squid doesn't implement threading but implements "concurrency".

Don't touch PHP as a helper!!! (+1 to Alex)

Also take into account that Redis or Memcached is less preferable in many cases 
if the library doesn't reuse the existing connection for multiple queries.
Squid also implements caching for helper answers, so it's possible to implement 
the helper and ACLs in such a way that Squid's caching will
help you lower the access to the external API and/or Redis/Memcached/DB.
I also have good experience with some libraries that implement caching, which I have 
used inside a helper with a limited size as a "level 1" cache.
If you implement both the helper and the server side of the solution, as 
ufdbGuard does, you should be able to optimize the system
to take a very high load.
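For reference, the helper-answer cache mentioned above is tuned per external_acl_type in squid.conf; a hedged sketch (helper path, ACL name and numbers are illustrative only):

```
# Cache up to 10000 helper answers; positive answers for 1h, negative for 60s.
external_acl_type categ ttl=3600 negative_ttl=60 cache=10000 \
    children-max=10 children-startup=2 concurrency=50 \
    %DST /usr/local/bin/categ-helper
acl bad_category external categ
http_access deny bad_category
```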

I hope the above will help you.
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of 
André Bolinhas
Sent: Wednesday, February 2, 2022 00:09
To: 'Alex 
Rousskov';squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

Hi
Thanks for the reply.
I will take a look on Rust as you recommend.
Also, between Python and Go, which is best for multithreading and concurrency?
Does Rust support multithreading and concurrency?
Best regards

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: 1 February 2022 22:01
To:squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

On 2/1/22 16:47, André Bolinhas wrote:

Hi

I’m building an external helper to get the categorization of an
website, I know how to build it, but I need you option about the best
language for the job in terms of performance, bottlenecks, I/O blocking..

The helper will work like this.

1º It will check hot memory for a faster response (memcached or Redis).

2º If the result does not exist in hot memory, it will query an external
API to fetch the category and save it in hot memory.
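The two-step lookup described above can be sketched as follows; the API endpoint is hypothetical, and a plain dict stands in for memcached/Redis:

```python
import urllib.request

hot_cache = {}  # stand-in for memcached/Redis ("hot memory")

def fetch_category(domain):
    # Hypothetical remote categorization API; substitute the real endpoint.
    url = "https://api.example.invalid/categorize?d=" + domain
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8").strip()

def categorize(domain, fetch=fetch_category):
    """Step 1: hot memory; step 2: external API on a miss, then cache it."""
    if domain in hot_cache:
        return hot_cache[domain]
    category = fetch(domain)
    hot_cache[domain] = category
    return category
```

Whatever the language, the hot path is a single dict/Redis lookup; the external API is only hit once per domain until the cache entry expires.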

In what language do you recommend develop such helper? PHP, Python, Go..

If this helper is for long-term production use, and you are willing to learn 
new things, then use Rust[1]. Otherwise, use whatever language you are the most 
comfortable with already (except PHP), especially if that language has good 
libraries/wrappers for the external APIs you will need to use.

Alex.
[1]https://www.rust-lang.org/
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-31 Thread David Touzeau
Does adapted_http_access support url_rewrite_program? It seems it only 
supports eCAP/ICAP.


On 31/01/2022 at 03:52, Amos Jeffries wrote:

On 31/01/22 13:20, David Touzeau wrote:

But it makes 2 connections to the squid for just stopping queries.
It seems not really optimized.



The joys of using URL modification to decide security access.



I notice that for several reasons i cannot switch to an external_acl



:(



Is there a way / idea ?



<http://www.squid-cache.org/Doc/config/adapted_http_access/>


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-30 Thread David Touzeau

Hi

I have built my own squid url_rewrite_program

protocol requires answering with

# OK status=301|302 url=
Or
# OK rewrite-url="http://blablaba;

In my case, especially for trackers/ads, I would like to say to browsers: 
"Go away!" without needing to redirect them.

Sure i can use these methods but...

1) 127.0.0.1 - the browser is in charge of getting out

OK status=302 url="http://127.0.0.1" But this isn't clean or polished.


2) 127.0.0.1 - Squid is in charge of getting out

OK rewrite-url="http://127.0.0.1" But this really isn't clean or 
polished.

Squid complains in its logs about an unreachable URL and pollutes the events


3) Redirect to a dummy page with a deny acl

OK status=302 url="http://dummy.com;
acl dummy dstdomain dummy.com
http_access deny dummy
deny_info TCP_RESET dummy

But this makes two connections to Squid just to stop a request.
It does not seem very efficient.

I notice that for several reasons i cannot switch to an external_acl

Is there a way / idea ?


Regards
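For reference, option 3 above can be collapsed into a single connection if only the block decision is moved into a small external ACL, letting deny_info send the reset directly; a hedged squid.conf sketch (helper path and ACL name are hypothetical):

```
external_acl_type ads_check concurrency=50 %URI /usr/local/bin/ads-helper
acl is_ad external ads_check
http_access deny is_ad
deny_info TCP_RESET is_ad
```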







___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] security_file_certgen I/O

2021-12-01 Thread David Touzeau


Hi

We use Squid 5.2 and we see that security_file_certgen consumes I/O.
Is there any way to put the ssldb in memory without needing to mount a tmpfs?

regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] %notes in error pages

2021-11-27 Thread David Touzeau

Hi

Working like a charm !!!

Many thanks!!

On 26/11/2021 at 17:43, Alex Rousskov wrote:

On 11/25/21 4:46 PM, David Touzeau wrote:


We need to add %note added from external helper using a deny_info and
specific squid error page.

tried with %o or %m without success

Is there a token to build an error page with an external acl helper output ?

Use @Squid{%code} syntax to add logformat %code (including %note) to
your error page. The feature is available in v5 and beyond. More details
may be available athttps://github.com/squid-cache/squid/commit/7e6eabb


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] %notes in error pages

2021-11-25 Thread David Touzeau


Hi,

We need to add %note added from external helper using a deny_info and 
specific squid error page.


tried with %o or %m without success

Is there a token to build an error page with an external acl helper output ?

Regards___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-23 Thread David Touzeau

Hi

According to your documentation, with a rock cache_dir objects larger 
than 32,000 bytes cannot be cached.
If aufs cannot be used in an SMP configuration, how can we handle 
larger files in the cache?
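For what it's worth, newer Squid releases implement "large rock", where an object may span multiple slots, so max-size can exceed slot-size; a hedged sketch of an SMP-friendly single rock store (sizes illustrative; verify against your Squid version's release notes):

```
workers 2
# Large rock: objects span multiple 16 KB slots, so max-size > slot-size is allowed.
cache_dir rock /home/squid/cache/rock 50024 slot-size=16384 min-size=0 max-size=3221225472
```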


On 23/11/2021 at 11:01, David Touzeau wrote:

Ok thanks, we will investigate in this way

On 22/11/2021 at 19:33, Alex Rousskov wrote:

On 11/22/21 12:48 PM, David Touzeau wrote:

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 
slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir    aufs    /home/squid/Caches/disk    50024    16    256 
min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


where is the false settings ?

I am limiting my answer to the problems in this email thread scope: aufs
cache_dirs are UFS-based cache_dirs. UFS-based cache_dirs are not
SMP-aware and are not supported in SMP configurations. Your choices include:

* drop SMP (i.e. remove "workers" and ARA)
* drop aufs (i.e. remove "cache_dir aufs" and ARA)

... where ARA is "adjust the rest of the configuration accordingly".


HTH,

Alex.



On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does mean this error :

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service it always crashes.
How can we can fix it ( purge cache , reboot )... ?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] tlu.dl.delivery.mp.microsoft.com and HTTP range header

2021-11-23 Thread David Touzeau

Hi community,

tlu.dl.delivery.mp.microsoft.com belongs to the app store, and it encounters 
an issue with high bandwidth usage.
We think this is caused by Squid filtering the HTTP Range header out of 
the HTTP requests.

This causes the app store to download everything in an endless loop.

We know that Squid does not currently support HTTP Range requests:
https://wiki.squid-cache.org/Features/HTTP11#Range_Requests

Is there any workaround to avoid the high bandwidth usage of 
Microsoft clients without needing to cache the objects?


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-23 Thread David Touzeau

Ok thanks, we will investigate in this way

Le 22/11/2021 à 19:33, Alex Rousskov a écrit :

On 11/22/21 12:48 PM, David Touzeau wrote:

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 
slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir    aufs    /home/squid/Caches/disk    50024    16    256 
min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


where is the false settings ?

I am limiting my answer to the problems in this email thread scope: aufs
cache_dirs are UFS-based cache_dirs. UFS-based cache_dirs are not
SMP-aware and are not supported in SMP configurations. Your choices include:

* drop SMP (i.e. remove "workers" and ARA)
* drop aufs (i.e. remove "cache_dir aufs" and ARA)

... where ARA is "adjust the rest of the configuration accordingly".


HTH,

Alex.



On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does mean this error :

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service it always crashes.
How can we can fix it ( purge cache , reboot )... ?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-22 Thread David Touzeau

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 
slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir    aufs    /home/squid/Caches/disk    50024    16    256 
min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


where is the false settings ?
Missing cache_dir ?


On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does mean this error :

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service it always crashes.
How can we can fix it ( purge cache , reboot )... ?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-22 Thread David Touzeau

Hi, community

What does mean this error :

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930: 
"!transients || e.hasTransients()"

    current master transaction: master69


We are unable to start the service it always crashes.
How can we can fix it ( purge cache , reboot )... ?___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Stable Squid Version for production on Linux

2021-11-16 Thread David Touzeau

Hi,

For us it is Squid v4.17

On 16/11/2021 at 17:40, Graminsta wrote:


Hey folks  ;)

What is the most stable squid version for production on Ubuntu 18 or 20?

Marcelo


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-16 Thread David Touzeau

Any tips?

Is anyone using fake NTLM with modern browsers?

On 11/11/2021 at 13:16, David Touzeau wrote:

Thanks Amos, it will help to understand something.

I think modern browsers send NTLMv2 while ntlm_fake_auth understands 
only NTLMv1 (perhaps).

Using curl with the --proxy-ntlm option works with Squid, while using a 
browser always returns a 407.

Do you know the limitations of ntlm_fake_auth regarding the NTLM version?
Is there a way to fix it?

* CURL 

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 06 82 08 00  NTLMSSP. 


[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 06 82 08 00  15 3A CC 83 0B 80 7B 45  ...E
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'KK' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  03 00 00 00 18 00 18 00 NTLMSSP. 
[0010]  40 00 00 00 30 00 30 00  58 00 00 00 00 00 00 00 0.0. X...
[0020]  88 00 00 00 04 00 04 00  88 00 00 00 09 00 09 00  
[0030]  8C 00 00 00 00 00 00 00  00 00 00 00 06 82 08 00  
[0040]  EB C7 B7 11 26 62 FD 82  B0 45 68 62 E0 6C E6 A3 .b.. .Ehb.l..
[0050]  57 A7 E6 76 1C 7B 79 74  17 71 72 5B 72 38 DA 30 W..v..yt .qr.r8.0
[0060]  06 4D 15 1F 9B D1 A2 A5  01 01 00 00 00 00 00 00 .M.. 
[0070]  80 38 3C 2A EA D6 D7 01  57 A7 E6 76 1C 7B 79 74 .8.. W..v..yt
[0080]  00 00 00 00 00 00 00 00  74 6F 74 6F 6E 74 6C 6D  totontlm
[0090]  70 72 6F 78 79 proxy
ntlmauth.cc(244): pid=31874 :ntlm_unpack_auth: size of 149
ntlmauth.cc(245): pid=31874 :ntlm_unpack_auth: flg 00088206
ntlmauth.cc(246): pid=31874 :ntlm_unpack_auth: lmr o(64) l(24)
ntlmauth.cc(247): pid=31874 :ntlm_unpack_auth: ntr o(88) l(48)
ntlmauth.cc(248): pid=31874 :ntlm_unpack_auth: dom o(136) l(0)
ntlmauth.cc(249): pid=31874 :ntlm_unpack_auth: usr o(136) l(4)
ntlmauth.cc(250): pid=31874 :ntlm_unpack_auth: wst o(140) l(9)
ntlmauth.cc(251): pid=31874 :ntlm_unpack_auth: key o(0) l(0)
ntlmauth.cc(257): pid=31874 :ntlm_unpack_auth: Domain 't' (len=1).
*ntlmauth.cc(268): pid=31874 :ntlm_unpack_auth: Username 'toton' (len=5).*
ntlm_fake_auth.cc(210): pid=31874 :sending 'AF toton' to squid


* But when connecting any modern browser to squid ***

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2  NTLMSSP. 


[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  C9 F0 4C 07 E0 79 9F CF  ..L..y..
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  49 12 A5 8A C8 17 3E 9D  I...
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  09 6D 48 E6 12 9C 4B 30  .mH...K0
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  F5 F6 8C B4 16 B9 20 CD  
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU



On 11/11/2021 at 08:40, Amos Jeffries wrote:

On 11/11/21 14:12, David Touzeau wrote:

Hi,
i would like to use ntlm_fake_auth but it seems

Re: [squid-users] Squid 5.2 unstable in production mode

2021-11-11 Thread David Touzeau

Hi

Max filedescriptors is defined in squid.conf.
Yes, in some cases c-icap was installed and the proxy became more 
stable for a while.

But the filedescriptor issue remains unstable... I really do not know why.


Running Debian 11 is difficult; it is a very new OS and we consider 
Debian 10 as the currently stable one.

Also, Squid 4 works very well on Debian 10.


On 11/11/2021 at 20:58, Flashdown wrote:

Hi David,

well I am curious, where did you set the max filedescriptors? Only in 
the OS configuration? If so, you also need to define it in the 
squid.conf as well -> 
http://www.squid-cache.org/Versions/v5/cfgman/max_filedescriptors.html
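
A minimal sketch of that directive (the value is an example; the effective
limit is also capped by the OS limit, e.g. `ulimit -n` or `LimitNOFILE=` in
a systemd unit, so both must allow it):

```
# squid.conf: request 65535 file descriptors (example value)
max_filedescriptors 65535
```

Squid will use the smaller of the configured value and what the OS grants,
which would explain a configured 65535 showing up as 4096 in mgr:info.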


Regarding the memory leak, do you use an adaption service such as c-icap?
If so, what is the result of: ss -ant | grep CLOSE_WAIT | wc -l

Maybe you should try building Squid 5 against Debian 11, to get the 
latest versions of the needed dependencies, and see whether the memory 
leak is gone.


I run multiple Squid 5.2 servers on Debian 11 in production and do not 
have any issues.

---
Best regards,
Enrico Heine

Am 2021-11-11 20:08, schrieb David Touzeau:

Hi

Just for information; I hope it helps.

We have installed Squid 5.1 and Squid 5.2 in production mode.
It seems that after several days, Squid becomes very unstable.
Note that when running 4.x we did not encounter these errors with the same
configuration, same users, and same network (we replaced the binaries
and kept the same configuration).

All production servers are installed in a virtual environment (ESXi
or Nutanix) on Debian 10.x, with about 4 to 8 vCPUs, 8 GB of memory,
and from 20 to 5000 users.

After several tests we saw that the number of users has no impact on
stability: we encounter the same errors on a 20-user proxy as on a
5000-user proxy.

1) Memory leak
-
This was encountered on machines with more than 10 GB of memory: Squid
eats more than 8 GB.
After consuming all the memory, Squid is unable to load helpers and the
listening ports freeze.
A service restart frees the memory and fixes the issue.

2) Max filedescriptors issues:

Strangely, Squid does not honor the configured parameter: for example,
we set 65535 file descriptors, but squidclient mgr:info reports 4096
and sometimes falls back to 1024.

Several times Squid reports:

    current master transaction: master15881
2021/11/11 17:10:09 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:10:29 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:10:51 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:11:56 kid1| TCP connection to 127.0.0.1/2320 failed
    current master transaction: master15881
2021/11/11 17:13:02 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:13:19 kid1| WARNING! Your cache is running out of
filedescriptors

But mgr:info reports:

    memPoolFree calls:    4295601
File descriptor usage for squid:
    Maximum number of file descriptors:   10048
    Largest file desc currently in use:    262
    Number of file desc currently in use:  135
    Files queued for open:   0
    Available number of file descriptors: 9913
    Reserved number of file descriptors:  9789

After these errors the listening port freezes and nobody is able to surf.
A simple "squid -k reconfigure" fixes the issue and the proxy returns to
normal for several minutes, then the file-descriptor issues come back.

There is no relationship between the file-descriptor issues and the number
of clients.
Sometimes the issue appears during the night, when no user is using the
proxy (just some robots, like Windows Update).

Is there anything else we can investigate to help improve the stability of
the 5.x branch?
Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.2 unstable in production mode

2021-11-11 Thread David Touzeau

Hi

Just for information; I hope it helps.

We have installed Squid 5.1 and Squid 5.2 in production mode.
It seems that after several days, Squid becomes very unstable.
Note that when running 4.x we did not encounter these errors with the same
configuration, same users, and same network (we replaced the binaries
and kept the same configuration).


All production servers are installed in a virtual environment (ESXi or
Nutanix) on Debian 10.x, with about 4 to 8 vCPUs, 8 GB of memory,

and from 20 to 5000 users.

After several tests we saw that the number of users has no impact on
stability: we encounter the same errors on a 20-user proxy as on a
5000-user proxy.



1) Memory leak
-
This was encountered on machines with more than 10 GB of memory: Squid
eats more than 8 GB.
After consuming all the memory, Squid is unable to load helpers and the
listening ports freeze.

A service restart frees the memory and fixes the issue.

2) Max filedescriptors issues:

Strangely, Squid does not honor the configured parameter: for example,
we set 65535 file descriptors, but squidclient mgr:info reports 4096
and sometimes falls back to 1024.


Several times Squid reports:

    current master transaction: master15881
2021/11/11 17:10:09 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:10:29 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:10:51 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:11:56 kid1| TCP connection to 127.0.0.1/2320 failed
    current master transaction: master15881
2021/11/11 17:13:02 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:13:19 kid1| WARNING! Your cache is running out of 
filedescriptors


But mgr:info reports:

    memPoolFree calls:    4295601
File descriptor usage for squid:
    Maximum number of file descriptors:   10048
    Largest file desc currently in use:    262
    Number of file desc currently in use:  135
    Files queued for open:   0
    Available number of file descriptors: 9913
    Reserved number of file descriptors:  9789

After these errors the listening port freezes and nobody is able to surf.
A simple "squid -k reconfigure" fixes the issue and the proxy returns to
normal for several minutes, then the file-descriptor issues come back.


There is no relationship between the file-descriptor issues and the number
of clients.
Sometimes the issue appears during the night, when no user is using the
proxy (just some robots, like Windows Update).





Is there anything else we can investigate to help improve the stability of
the 5.x branch?

Regards


Re: [squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-11 Thread David Touzeau

Thanks Amos, that helps me understand what is going on.

I think modern browsers send NTLMv2, while ntlm_fake_auth perhaps
understands only NTLMv1.


Using curl with the --proxy-ntlm option works with Squid, whereas using a
browser always results in a 407.

Do you know the limitations of ntlm_fake_auth regarding the NTLM version?
Is there a way to fix it?
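
As a side note, the tokens in the captures below can be decoded directly to
see what each client negotiates. A minimal sketch, assuming the standard
NTLMSSP/MS-NLMP layout (8-byte signature, little-endian message type at
offset 8, negotiate flags at offset 12 in a Type 1 message); the flag
constant is from that specification:

```python
import struct

# "NTLM2" / extended session security negotiate flag (MS-NLMP)
NTLM_NEGOTIATE_EXTENDED_SESSIONSECURITY = 0x00080000

def parse_type1(raw: bytes):
    """Return (message_type, flags) from an NTLMSSP Type 1 token."""
    if raw[:8] != b"NTLMSSP\x00":
        raise ValueError("not an NTLMSSP token")
    msg_type = struct.unpack_from("<I", raw, 8)[0]   # 1=Negotiate, 2=Challenge, 3=Authenticate
    flags = struct.unpack_from("<I", raw, 12)[0]     # negotiate flags (Type 1 layout)
    return msg_type, flags

# The browser's first token, hex copied from the dump below:
raw = bytes.fromhex(
    "4e544c4d5353500001000000078208a2"
    "00000000000000000000000000000000"
    "0a0063450000000f"
)
msg_type, flags = parse_type1(raw)
print(msg_type, hex(flags))  # 1 0xa2088207
print(bool(flags & NTLM_NEGOTIATE_EXTENDED_SESSIONSECURITY))  # True
```

Decoding the curl token the same way (its flags word is 06 82 08 00 in the
dump) lets one compare exactly which options each client asks for.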

*** CURL ***

[0000]  4E 54 4C 4D 53 53 50 00  01 00 00 00 06 82 08 00  NTLMSSP.
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP.
[0010]  AE AA AA AA 06 82 08 00  15 3A CC 83 0B 80 7B 45  ...E
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'KK' from Squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  03 00 00 00 18 00 18 00 NTLMSSP.
[0010]  40 00 00 00 30 00 30 00  58 00 00 00 00 00 00 00 0.0. X...
[0020]  88 00 00 00 04 00 04 00  88 00 00 00 09 00 09 00  
[0030]  8C 00 00 00 00 00 00 00  00 00 00 00 06 82 08 00  
[0040]  EB C7 B7 11 26 62 FD 82  B0 45 68 62 E0 6C E6 A3 .b.. .Ehb.l..
[0050]  57 A7 E6 76 1C 7B 79 74  17 71 72 5B 72 38 DA 30 W..v..yt .qr.r8.0
[0060]  06 4D 15 1F 9B D1 A2 A5  01 01 00 00 00 00 00 00 .M.. 
[0070]  80 38 3C 2A EA D6 D7 01  57 A7 E6 76 1C 7B 79 74 .8.. W..v..yt
[0080]  00 00 00 00 00 00 00 00  74 6F 74 6F 6E 74 6C 6D  totontlm
[0090]  70 72 6F 78 79 proxy
ntlmauth.cc(244): pid=31874 :ntlm_unpack_auth: size of 149
ntlmauth.cc(245): pid=31874 :ntlm_unpack_auth: flg 00088206
ntlmauth.cc(246): pid=31874 :ntlm_unpack_auth: lmr o(64) l(24)
ntlmauth.cc(247): pid=31874 :ntlm_unpack_auth: ntr o(88) l(48)
ntlmauth.cc(248): pid=31874 :ntlm_unpack_auth: dom o(136) l(0)
ntlmauth.cc(249): pid=31874 :ntlm_unpack_auth: usr o(136) l(4)
ntlmauth.cc(250): pid=31874 :ntlm_unpack_auth: wst o(140) l(9)
ntlmauth.cc(251): pid=31874 :ntlm_unpack_auth: key o(0) l(0)
ntlmauth.cc(257): pid=31874 :ntlm_unpack_auth: Domain 't' (len=1).
*ntlmauth.cc(268): pid=31874 :ntlm_unpack_auth: Username 'toton' (len=5).*
ntlm_fake_auth.cc(210): pid=31874 :sending 'AF toton' to squid


*** But when connecting any modern browser to squid ***

[0000]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2  NTLMSSP.
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP.
[0010]  AE AA AA AA 07 82 08 A2  C9 F0 4C 07 E0 79 9F CF  ..L..y..
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP.
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP.
[0010]  AE AA AA AA 07 82 08 A2  49 12 A5 8A C8 17 3E 9D  I...
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP.
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP.
[0010]  AE AA AA AA 07 82 08 A2  09 6D 48 E6 12 9C 4B 30  .mH...K0
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP.
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[0000]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP.
[0010]  AE AA AA AA 07 82 08 A2  F5 F6 8C B4 16 B9 20 CD  
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU



On 11/11/2021 at 08:40, Amos Jeffries wrote:

On 11/11/21 14:12, David Touzeau wrote:

Hi,
I would like to use ntlm_fake_auth, but it seems Squid refuses to 
switch to the authenticated user and returns a 407 to the browser, and 
Squid never accepts

[squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-10 Thread David Touzeau

Hi,
I would like to use ntlm_fake_auth, but it seems Squid refuses to switch 
to the authenticated user: it returns a 407 to the browser and never 
accepts the credentials.


What am I missing?

Configuration seems simple:
auth_param ntlm program /lib/squid3/ntlm_fake_auth -v
auth_param ntlm children 20 startup=5 idle=1 concurrency=0 queue-size=80 
on-persistent-overload=ERR

acl AUTHENTICATED proxy_auth REQUIRED
http_access deny  !AUTHENTICATED

Here the debug mode;

2021/11/11 01:36:16.862 kid1| 14,3| ipcache.cc(614) 
ipcache_gethostbyname: ipcache_gethostbyname: 'www.squid-cache.org', flags=1
2021/11/11 01:36:16.862 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'212.199.163.170' NOT found
2021/11/11 01:36:16.862 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'196.200.160.70' NOT found
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
NetworksBlackLists = 0
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access#29 = 0
2021/11/11 01:36:16.862 kid1| 28,5| Checklist.cc(397) bannedAction: 
Action 'DENIED/0' is not banned
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
http_access#30
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
NormalPorts
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(212) append: from c-string 
to id SBuf1021843
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 13 
for SBuf1021843
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf1021843 
new store capacity: 40
2021/11/11 01:36:16.862 kid1| 28,3| StringData.cc(33) match: 
aclMatchStringList: checking 'MyPortNameID1'
2021/11/11 01:36:16.862 kid1| 28,3| StringData.cc(36) match: 
aclMatchStringList: 'MyPortNameID1' found
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
NormalPorts = 1
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
!AUTHENTICATED
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
AUTHENTICATED
2021/11/11 01:36:16.862 kid1| 29,4| UserRequest.cc(354) authenticate: No 
connection authentication type
2021/11/11 01:36:16.862 kid1| 29,5| User.cc(36) User: Initialised 
auth_user '0x5570e8c4d240'.
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(99) UserRequest: 
initialised request 0x5570e8cdacf0
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(212) append: from c-string 
to id SBuf1021846
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 61 
for SBuf1021846
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf1021846 
new store capacity: 128
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 33,2| client_side.cc(507) setAuth: Adding 
connection-auth to local=192.168.90.170:3128 remote=192.168.90.10:50746 
FD 12 flags=1 from new NTLM handshake request
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 28,3| AclProxyAuth.cc(131) checkForAsync: 
checking password via authenticator
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1292) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: Running servers 5
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1309) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: returning srv-Hlpr66
2021/11/11 01:36:16.862 kid1| 5,5| AsyncCall.cc(26) AsyncCall: The 
AsyncCall helperStatefulDispatchWriteDone constructed, 
this=0x5570e8c8f8e0 [call581993]
2021/11/11 01:36:16.862 kid1| 5,5| Write.cc(35) Write: local=[::] 
remote=[::] FD 10 flags=1: sz 60: asynCall 0x5570e8c8f8e0*1
2021/11/11 01:36:16.862 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 10, 
type=2, handler=1, client_data=0x7f9e5d8a75a8, timeout=0
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1430) 
helperStatefulDispatch: helperStatefulDispatch: Request sent to 
ntlmauthenticator #Hlpr66, 60 bytes
2021/11/11 01:36:16.862 kid1| 28,4| Acl.cc(72) AuthenticateAcl: 
returning 2 sending credentials to helper.
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
AUTHENTICATED = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
!AUTHENTICATED = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access#30 = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access = -1 async
2021/11/11 01:36:16.862 kid1| 33,4| Server.cc(90) readSomeData: 
local=192.168.90.170:3128 remote=192.168.90.10:50746 FD 12 flags=1: 
reading request...
2021/11/11 01:36:16.862 kid1| 33,5| AsyncCall.cc(26) AsyncCall: The 
AsyncCall Server::doClientRead constructed, this=0x5570e87cfd50 [call581994]
2021/11/11 01:36:16.862 kid1| 5,5| Read.cc(57) comm_read_base: 
comm_read, queueing read for local=192.168.90.170:3128 

Re: [squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-02 Thread David Touzeau
OK, we will investigate the parent proxy, but it seems that when the child 
Squid reports a failed TCP connection, it concludes that the peer is dead 
and all browsing stops for a while (a "squid -k reconfigure" fixes the 
issue quickly), because it does not have any other path to forward the 
requests.






On 02/11/2021 at 16:17, Alex Rousskov wrote:

On 11/2/21 10:40 AM, David Touzeau wrote:

2021/11/01 16:50:48.787 kid1| 93,3| Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT)
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: proto HTTP/1.1
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: status-code 503
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: reason-phrase Service 
Unavailable
Server: squid
Date: Mon, 01 Nov 2021 15:50:48 GMT
X-Squid-Error: ERR_CONNECT_FAIL 110
2021/11/01 16:50:48.787 kid1| 83,3| bailOnResponseError: unsupported CONNECT 
response status code
2021/11/01 16:50:48.787 kid1| TCP connection to 127.0.0.1/2320 failed


A parent[^1] proxy is a Squid proxy that cannot connect to the server in
question. That Squid proxy responds with an HTTP 503 Error to your Squid
CONNECT request. Your Squid logs the "TCP connection to ... failed"
error that you were wondering about.

This sequence highlights a deficiency in Squid CONNECT error handling
code (and possibly cache_peer configuration abilities). Ideally, Squid
should recognize Squid error responses coming from a parent HTTP proxy
and avoid complaining about remote Squid-origin errors as if they are
local Squid-parent errors. IIRC, some folks still insist on Squid
complaining about the latter "within hierarchy" errors, but the former
"external Squid-origin" errors are definitely not supposed to be
reported to admins via level-0/1 messages in cache.log.


HTH,

Alex.

[^1]: Direct or indirect parent -- I could not tell quickly but you
should be able to tell by looking at addresses, configurations, and/or
access logs. My bet is that it is an indirect parent if you are still
using a load balancer between Squids.




On 01/11/2021 at 15:53, Alex Rousskov wrote:

On 11/1/21 7:55 AM, David Touzeau wrote:


The Squid uses the loopback as a parent.

The same problem occurs:
06:19:47 kid1| TCP connection to 127.0.0.1/2320 failed
06:15:13 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:41 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:38 kid1| TCP connection to 127.0.0.1/2320 failed
06:13:15 kid1| TCP connection to 127.0.0.1/2320 failed
06:11:23 kid1| TCP connection to 127.0.0.1/2320 failed
cache_peer 127.0.0.1 parent 2320 0 name=Peer11 no-query default
connect-timeout=3 connect-fail-limit=5 no-tproxy

It is impossible to tell for sure what is going on because Squid does
not (unfortunately; yet) report the exact reason behind these connection
establishment failures or even the context in which a failure has
occurred. You may be able to tell more by collecting/analyzing packet
captures. Developers may be able to tell more if you share, say, ALL,5
debugging logs that show what led to the failure report.
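
For completeness, the debugging Alex refers to can be enabled with a single
directive; a minimal sketch (the level is the ALL,5 he mentions; it is very
verbose, so revert it once the failure has been captured):

```
# squid.conf: maximum debug detail for all sections (temporary!)
debug_options ALL,5
```

A packet capture of the loopback parent traffic (e.g. tcpdump on port 2320,
the cache_peer port used in this thread) complements the debug log.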

Alex.


Re: [squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-02 Thread David Touzeau

Hi,

I took the time to enable the debug log and to parse the 10 GB of logs.

Here is the relevant part of the log:

2021/11/01 16:50:48.786 kid1| 33,5| AsyncCall.cc(30) AsyncCall: The 
AsyncCall Server::clientWriteDone constructed, this=0x55849cb132b0 
[call252226641]
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(37) Write: conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1: sz 4529: 
asynCall 0x55849cb132b0*1
2021/11/01 16:50:48.786 kid1| 5,5| ModEpoll.cc(118) SetSelect: FD 95, 
type=2, handler=1, client_data=0x7f1caaa1a2d0, timeout=0
2021/11/01 16:50:48.786 kid1| 20,3| store.cc(467) unlock: 
store_client::copy unlocking key 115EFC0099150100 
e:=sXIV/0x55849dfec190*4
2021/11/01 16:50:48.786 kid1| 20,3| store.cc(467) unlock: 
ClientHttpRequest::doCallouts-sslBumpNeeded unlocking key 
115EFC0099150100 e:=sXIV/0x55849dfec190*3
2021/11/01 16:50:48.786 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x55849316fc88
2021/11/01 16:50:48.786 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x55849316fc88
2021/11/01 16:50:48.786 kid1| 84,5| helper.cc(1319) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: Running servers 4
2021/11/01 16:50:48.786 kid1| 84,5| helper.cc(1344) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: returning srv-Hlpr469
2021/11/01 16:50:48.786 kid1| 5,4| AsyncCall.cc(30) AsyncCall: The 
AsyncCall helperStatefulHandleRead constructed, this=0x55848ad88730 
[call252226642]
2021/11/01 16:50:48.786 kid1| 5,5| Read.cc(58) comm_read_base: 
comm_read, queueing read for conn9811325 local=[::] remote=[::] FD 49 
flags=1; asynCall 0x55848ad88730*1
2021/11/01 16:50:48.786 kid1| 5,5| ModEpoll.cc(118) SetSelect: FD 49, 
type=1, handler=1, client_data=0x7f1caaa18a20, timeout=0
2021/11/01 16:50:48.786 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: 
leaving helperStatefulHandleRead(conn9811325 local=[::] remote=[::] FD 
49 flags=1, data=0x5584982781c8, size=300, buf=0x558498dde700)
2021/11/01 16:50:48.786 kid1| 1,5| CodeContext.cc(60) Entering: 
master25501192
2021/11/01 16:50:48.786 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn9812727 local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT 
FD 85 flags=1 (0, 0)
2021/11/01 16:50:48.786 kid1| 93,3| AsyncCall.cc(97) ScheduleCall: 
IoCallback.cc(131) will call Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, data=0x55849b747e18) [call252202273]
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(69) HandleWrite: conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1: off 0, 
sz 4529.
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(89) HandleWrite: write() 
returns 4529
2021/11/01 16:50:48.787 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn9813869 local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 
flags=1 (0, 0)
2021/11/01 16:50:48.787 kid1| 33,5| AsyncCall.cc(97) ScheduleCall: 
IoCallback.cc(131) will call Server::clientWriteDone(conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1, 
data=0x55849e4c8218) [call252226641]
2021/11/01 16:50:48.787 kid1| 1,5| CodeContext.cc(60) Entering: 
master25501192
2021/11/01 16:50:48.787 kid1| 93,3| AsyncCallQueue.cc(59) fireNext: 
entering Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, data=0x55849b747e18)
2021/11/01 16:50:48.787 kid1| 93,3| AsyncCall.cc(42) make: make call 
Http::Tunneler::handleReadyRead [call252202273]
2021/11/01 16:50:48.787 kid1| 93,3| AsyncJob.cc(123) callStart: 
Http::Tunneler status in: [state:w FD 85 job26507207]
2021/11/01 16:50:48.787 kid1| 5,3| Read.cc(93) ReadNow: conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, size 65535, retval 7782, errno 0

2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 1 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
3 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
1 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
19 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
2 bytes
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(224) parse: 
status-line: retval 1
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(225) parse: 
status-line: proto HTTP/1.1
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(226) parse: 
status-line: status-code 503
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(227) parse: 
status-line: reason-phrase Service Unavailable
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(228) parse: 
Parser: bytes processed=34
2021/11/01 16:50:48.787 kid1| 74,5| Parser.cc(192) grabMimeBlock: mime 
header (0-171) {Server: squid^M

Mime-Version: 1.0^M
Date: Mon, 01 Nov 2021 15:50:48 GMT^M
Content-Type: text/html;charset=utf-8^M
Content-Length: 7577^M

[squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-01 Thread David Touzeau

Hello Community,

We use child Squid proxies that connect to boxes that act as parents.
In version 4.x this configuration did not pose any problem.
Since version 5.2, we have had a lot of errors like:

01h 47mn kid1| TCP connection to 10.32.0.18/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed

However, we are sure that the parent proxies are available.
To make sure this is the case, we installed a local HAProxy that balances 
across the parent proxies.


The Squid uses the loopback as a parent.

The same problem occurs:
06:19:47 kid1| TCP connection to 127.0.0.1/2320 failed
06:15:13 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:41 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:38 kid1| TCP connection to 127.0.0.1/2320 failed
06:13:15 kid1| TCP connection to 127.0.0.1/2320 failed
06:11:23 kid1| TCP connection to 127.0.0.1/2320 failed

But at no point was the local HAProxy service down.

This leads us to believe that the parent Squid process randomly stalls 
when in fact there is no reason for this to happen.

There is a software problem rather than a network problem.

It is possible that the configuration is wrong, but we have tried many 
possibilities.


Here is our latest configuration:

cache_peer 127.0.0.1 parent 2320 0 name=Peer11 no-query default 
connect-timeout=3 connect-fail-limit=5 no-tproxy


Maybe we forgot something?

Regards


Re: [squid-users] Squid 5.1 memory usage

2021-10-08 Thread David Touzeau

Hi
Just to mention: we see high memory usage too, without ICAP and SSL bump;
after several days we need to restart the service.

On 08/10/2021 at 10:56, Steve Hill wrote:


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so 
I'd expect memory usage to be fairly low (and it was under Squid 3.5), 
but some workers are growing pretty large.  I'm using ICAP and SSL bump.


I've got a worker using 5 GB which I've collected memory stats from - 
the things which stand out are:

 - Long Strings: 220 MB
 - Short Strings: 2.1 GB
 - Comm::Connection: 217 MB
 - HttpHeaderEntry: 777 MB
 - MemBlob: 773 MB
 - Entry: 226 MB

What's the best way of debugging this?  Is there a way to list all of 
the Comm::Connection objects?


Thanks.






Re: [squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-21 Thread David Touzeau

Thanks Amos!

I think auth_schemes can be a workaround.
I will try it !
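
For reference, a hypothetical squid.conf sketch of such a workaround (the
ACL name and subnet are illustrative, not from the original posts): offer
only Basic to the off-domain machines, and the full scheme list to everyone
else.

```
# Hypothetical example: machines outside the Windows domain
acl offDomain src 192.168.90.0/24

# Advertise only Basic to off-domain clients; Negotiate then Basic to the rest
auth_schemes basic offDomain
auth_schemes negotiate,basic all
```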



On 21/09/2021 at 02:49, Amos Jeffries wrote:

On 21/09/21 11:49 am, David Touzeau wrote:


When edge, chrome and IE try to establish a session, Squid claim

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}


This leads us to understand that these three browsers try NTLM instead 
of Basic authentication.


I do not know why these browsers use NTLM, as they are not connected 
to the Windows domain.


Unlike Kerberos, NTLM does not require the machine to be connected to 
a domain to have credentials. AFAIK the browser still has access to 
the localhost user credentials for use in NTLM. Or the machine may 
even be trying to use the Basic auth credentials as LM tokens with 
NTLM scheme.




Why does Squid never get the Basic authentication credentials?



That is a Browser decision. All Squid can do is offer the schemes it 
supports and they have to choose which is used.



Did I miss something?


With Squid-5 you can use the auth_schemes directive to workaround 
issues like this.

 <http://www.squid-cache.org/Versions/v5/cfgman/auth_schemes.html>


Amos




Re: [squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-21 Thread David Touzeau
Thanks Louis for the tips, but we do not want to use NTLM, as it is an 
outdated mechanism.

It requires Samba on the Squid box.

As Amos said, this is mostly a browser issue (browsers using the Microsoft API).

The best way would be to make these browsers replicate the correct Firefox 
behavior, meaning: switch to Basic auth instead of trying this stupid NTLM method.

On 21/09/2021 at 09:38, L.P.H. van Belle wrote:


in your smb.conf add
 # Added to enforce NTLMv2; must be set on all Samba AD-DCs and the 
needed member servers.
 # This is used in combination with ntlm_auth --allow-mschapv2
 ntlm auth = mschapv2-and-ntlmv2-only

In squid use:
auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth \
 --kerberos /usr/lib/squid/negotiate_kerberos_auth -k 
/etc/squid/krb5-squid-HTTP.keytab \
 -s HTTP/proxy.fq.dn@my.realm.tld \
 --ntlm /usr/bin/ntlm_auth --allow-mschapv2 --helper-protocol=gss-spnego 
--domain=ADDOM

  
If you are connecting over LDAP, don't use -h 192.168.90.10

Use -H ldaps://host.name.fq.dn

Also push the domain's root CA to the PCs, with a GPO for example, and in 
that GPO you can enable the parts the users/PCs need to make it all work.

But you're close, you're almost there.

One thing I have not looked at myself yet is ext_kerberos_ldap_group_acl:
https://fossies.org/linux/squid/src/acl/external/kerberos_ldap_group/ext_kerberos_ldap_group_acl.8
That is one I'll be using with Squid 5.1. I'm still compiling everything I need,
but once I set it up, I'll document it and write a howto for it.

Greetz,

Louis





From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
On behalf of David Touzeau
Sent: Tuesday, 21 September 2021 1:49
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid 5.1: Kerberos: Unable to switch to basic 
auth with Edge - IE - Chrome
auth with Edge - IE - Chrome


Hi all

I have set up Kerberos authentication with a Windows 2019 domain using 
Squid 5.1 (changing the Squid version did not fix the issue; tested 4.x and 5.x).
In some cases, some computers are not joined to the domain and we 
need to allow them to authenticate to Squid.

To allow this, Basic authentication is defined in Squid, and we expect 
the browsers to prompt for a login so the user can authenticate and access the Internet.

But the behavior is strange.

On a computer outside the Windows domain:
Firefox is able to authenticate successfully to Squid using 
Basic auth.
Edge, Chrome, and IE still try using the NTLM method and are always 
rejected with a 407.

When edge, chrome and IE try to establish a session, Squid claim

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}

This suggests that these 3 browsers try NTLM instead of 
Basic Authentication.

I do not know why these browsers use NTLM, as they are not joined 
to the Windows domain.
Why does squid never get the Basic Authentication credentials?

Did I miss something?

Here is my configuration.

auth_param negotiate program /lib/squid3/negotiate_kerberos_auth -r -s 
GSS_C_NO_NAME -k /etc/squid3/PROXY.keytab
auth_param negotiate children 20 startup=5 idle=1 concurrency=0 
queue-size=80 on-persistent-overload=ERR
auth_param negotiate keep_alive on

auth_param basic program /lib/squid3/basic_ldap_auth -v -R -b "DC=articatech,DC=int" -D 
"administra...@articatech.int" <mailto:administra...@articatech.int>  -W 
/etc/squid3/ldappass.txt -f sAMAccountName=%s -v 3 -h 192.168.90.10
auth_param basic children 3
auth_param basic realm Active Directory articatech.int
auth_param basic credentialsttl 7200 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 1 seconds
authenticate_cache_garbage_interval 3600 seconds

acl AUTHENTICATED proxy_auth REQUIRED
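A side note on why the fallback never happens: when a 407 offers several schemes, browsers generally pick the strongest scheme they support and do not retry with a weaker one after it fails. A toy illustration of that selection; the priority table below is an assumption for demonstration only, not any browser's actual code:

```python
# Toy model of proxy-auth scheme selection. PRIORITY is an assumed
# ordering for illustration; real browsers hard-code their own.
PRIORITY = {"Negotiate": 3, "NTLM": 2, "Digest": 1, "Basic": 0}

def pick_scheme(proxy_authenticate_headers):
    """Return the highest-priority scheme among the offered 407 challenges."""
    offered = [h.split()[0] for h in proxy_authenticate_headers]
    return max(offered, key=lambda s: PRIORITY.get(s, -1))

# With both auth_param negotiate and auth_param basic configured, Squid
# offers both challenges, and Negotiate wins even on a machine that is
# outside the domain:
print(pick_scheme(["Negotiate",
                   'Basic realm="Active Directory articatech.int"']))
# -> Negotiate
```

In this model the client only falls back to Basic when Negotiate is not offered at all, which matches the behavior described above.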




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-20 Thread David Touzeau

Hi all

i have setup Kerberos authentication with Windows 2019 domain using 
Squid 5.1 ( The Squid version did not fix the issue - Tested 4.x and 5.x)
In some cases, some computers are not joined to the domain, and we need 
to allow them to authenticate on Squid.


To allow this, Basic Authentication is defined in Squid, and we expect 
the browsers to prompt for a login so the user can authenticate and access the Internet.


But the behavior is strange.

On a computer outside the Windows domain:
Firefox is able to authenticate successfully to squid using basic 
auth.
Edge, Chrome and IE still try using the NTLM method and are always 
rejected with a 407.


When Edge, Chrome and IE try to establish a session, Squid claims:

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}


This suggests that these 3 browsers try NTLM instead of 
Basic Authentication.


I do not know why these browsers use NTLM, as they are not joined 
to the Windows domain.

Why does squid never get the Basic Authentication credentials?

Did I miss something?

Here is my configuration.

auth_param negotiate program /lib/squid3/negotiate_kerberos_auth -r -s 
GSS_C_NO_NAME -k /etc/squid3/PROXY.keytab
auth_param negotiate children 20 startup=5 idle=1 concurrency=0 
queue-size=80 on-persistent-overload=ERR

auth_param negotiate keep_alive on

auth_param basic program /lib/squid3/basic_ldap_auth -v -R -b 
"DC=articatech,DC=int" -D "administra...@articatech.int" -W 
/etc/squid3/ldappass.txt -f sAMAccountName=%s -v 3 -h 192.168.90.10

auth_param basic children 3
auth_param basic realm Active Directory articatech.int
auth_param basic credentialsttl 7200 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 1 seconds
authenticate_cache_garbage_interval 3600 seconds

acl AUTHENTICATED proxy_auth REQUIRED

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks, I will try it this way.

On 16/09/2021 at 21:03, Alex Rousskov wrote:

On 9/16/21 2:52 PM, David Touzeau wrote:


It is true that it would be possible to use an external_acl in the
http_reply_access.

Do you think that by adding it at this position I would be able to use
squid's resolution results?

Yes, bugs notwithstanding, an external ACL evaluated at
http_reply_access time should have access to %
On 16/09/2021 at 19:43, Alex Rousskov wrote:

On 9/16/21 1:30 PM, David Touzeau wrote:


I'm turning to writing my own DNS resolution code, and I'm giving up on
retrieving this information through Squid.

Please note that if you do your own DNS resolution, then Squid DNS
resolution results will probably mismatch your results in some cases.
There have been many complaints about associated problems from folks
that went this route...

I am not sure what you are trying to do with that a %
On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for quick answer.

Can you take away any hope of a workaround with Squid ?

This makes me plan having to develop a function that has to perform
DNS resolution inside the helper with the performance consequences
that this will impose.


I would be looking at a design where a helper classifies requests and
using that later on when the server is known to match up the IP vs the
classification. I'm struggling to think of a flow that works
efficiently though.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks for the clarification and I agree with you completely.

a multipath or round-robin DNS setup will return different records 
for our DNS calculation and Squid's final results


It is true that it would be possible to use an external_acl in 
http_reply_access.


Do you think that by adding it at this position I would be able to use 
squid's resolution results?



On 16/09/2021 at 19:43, Alex Rousskov wrote:

On 9/16/21 1:30 PM, David Touzeau wrote:


I'm turning to writing my own DNS resolution code, and I'm giving up on
retrieving this information through Squid.

Please note that if you do your own DNS resolution, then Squid DNS
resolution results will probably mismatch your results in some cases.
There have been many complaints about associated problems from folks
that went this route...

I am not sure what you are trying to do with that a %
On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for quick answer.

Can you take away any hope of a workaround with Squid ?

This makes me plan having to develop a function that has to perform
DNS resolution inside the helper with the performance consequences
that this will impose.


I would be looking at a design where a helper classifies requests and
using that later on when the server is known to match up the IP vs the
classification. I'm struggling to think of a flow that works
efficiently though.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Amos,

Thank you for your response and kindness,
I'm turning to writing my own DNS resolution code, and I'm giving up on 
retrieving this information through Squid.


On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for quick answer.

Can you take away any hope of a workaround with Squid ?

This makes me plan having to develop a function that has to perform 
DNS resolution inside the helper with the performance consequences 
that this will impose.




I would be looking at a design where a helper classifies requests and 
using that later on when the server is known to match up the IP vs the 
classification. I'm struggling to think of a flow that works 
efficiently though.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks Amos for the quick answer.

Can you take away any hope of a workaround with Squid?

This means I will have to develop a function that performs DNS 
resolution inside the helper, with the performance consequences that this 
will impose.
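If the helper does end up resolving names itself, the per-request cost can at least be softened with a small cache. A minimal sketch (the function names are mine, and Alex's caveat elsewhere in this thread still applies: these results may disagree with Squid's own resolution):

```python
import socket

def make_cached_resolver(resolve=socket.gethostbyname, max_entries=10_000):
    """Wrap a resolver function with a simple in-memory cache."""
    cache = {}

    def cached(host):
        if host not in cache:
            if len(cache) >= max_entries:
                cache.clear()  # crude eviction; a real helper would use an LRU
            cache[host] = resolve(host)
        return cache[host]

    return cached
```

Each helper process keeps its own cache, so the hit rate also depends on how Squid spreads requests among helper children.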




On 16/09/2021 at 16:21, Amos Jeffries wrote:

On 16/09/21 10:09 pm, David Touzeau wrote:

Hi community, Squid fans,

I would like to use an external acl process for GeoIP processing.

I have tried to set up squid to send the remote peer address using the 
% format code, but it always replies with a "-".


external_acl_type MyGeopip ttl=3600 negative_ttl=3600 
children-startup=2 children-idle=2 children-max=20 concurrency=1 ipv4 
%un %SRC %SRCEUI48 %>ha{X-Forwarded-For} %DST %ssl::>sni 
%USER_CERT_CN %note %

acl MyGeopip_acl external MyGeopip
http_access deny !MyGeopip_acl

I was thinking that Squid calls the helper before resolving the remote 
route.




The problem is there is no server/peer connection at all for a 
transaction that has only been received and not yet processed by Squid.



So to force it, I have added a "fake" acl to make Squid calculate 
the remote address.


acl fake_dst dst 127.0.0.2
http_access deny !fake_dst !MyGeopip_acl

But it failed too: the external_acl still receives the "-" instead of 
the remote public IP address of the server.




Aye. There is still no server.

All this dst ACL changed was that Squid knows a group of IPs it 
*might* select from. The decision whether to use one of them (or 
somewhere entirely different) has not yet been made, so there is still 
no server.


The "%…" value only changes when automated retries are done, and is "-" at all points before any 
server contact.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Hi community, Squid fans,

I would like to use an external acl process for GeoIP processing.

I have tried to set up squid to send the remote peer address using the 
% format code, but it always replies with a "-".


external_acl_type MyGeopip ttl=3600 negative_ttl=3600 children-startup=2 
children-idle=2 children-max=20 concurrency=1 ipv4 %un %SRC %SRCEUI48 
%>ha{X-Forwarded-For} %DST %ssl::>sni %USER_CERT_CN %note %/lib/squid3/squid-geoip


acl MyGeopip_acl external MyGeopip
http_access deny !MyGeopip_acl

I was thinking that Squid calls the helper before resolving the remote route.

So to force it, I have added a "fake" acl to make Squid calculate 
the remote address.


acl fake_dst dst 127.0.0.2
http_access deny !fake_dst !MyGeopip_acl

But it failed too: the external_acl still receives the "-" instead of the 
remote public IP address of the server.



Where is the mistake ?

Regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-09-15 Thread David Touzeau

Many thanks

It fixed the issue!

On 15/09/2021 at 13:08, Graham Wharton wrote:
You see this when starting as a non-root user. Squid should be started as 
root; it then changes identity to the cache effective user defined 
in the config when it forks.
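In other words, the privilege drop is driven by the config, not by setuid bits on helper binaries. A minimal sketch of the relevant directive (the account name is the common default; adjust to your build):

```
# squid.conf: start Squid itself as root (e.g. from the init system);
# the worker processes then drop to this unprivileged account.
cache_effective_user squid
```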


Graham Wharton
Lube Finder
Tel (UK) : 0800 955  0922
Tel (Intl) : +44 1305 898033
https://www.lubefinder.com

*From:* squid-users  on 
behalf of David Touzeau 

*Sent:* Wednesday, September 15, 2021 11:40:04 AM
*To:* squid-users@lists.squid-cache.org 

*Subject:* [squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): 
(1) Operation not permitted

On Debian 10 64bits  with squid 5.1 we have thousand warning as this:

2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted


This happens when squid tries to load the external acl binaries.

Adding chmod 04755 on the binaries did not resolve the issue.

No issue with the same configuration on the squid 3.5.x branch.

Any tips ?


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-09-15 Thread David Touzeau

On Debian 10 64bits  with squid 5.1 we have thousand warning as this:

2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted


This happens when squid tries to load the external acl binaries.

Adding chmod 04755 on the binaries did not resolve the issue.

No issue with the same configuration on the squid 3.5.x branch.

Any tips ?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Log to statsd

2021-08-11 Thread David Touzeau

Basically syslogd can do what you want: send via TCP, HTTP or UDP.

So the deal is to use:

logformat my_metrics      [statsd] %icap::tt %
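For reference, statsd's wire format is just one plain-text metric per UDP datagram, so whatever reads the log only has to format and send lines. A minimal sketch (the metric name and address are hypothetical):

```python
import socket

def statsd_packet(metric, value, unit="ms"):
    """Build one statsd metric line, e.g. 'squid.icap_time:42|ms'."""
    return f"{metric}:{value}|{unit}"

def send_metric(sock, addr, metric, value, unit="ms"):
    # statsd expects one datagram per metric; no reply, no handshake.
    sock.sendto(statsd_packet(metric, value, unit).encode(), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A log-reading daemon would call this once per parsed access.log line.
send_metric(sock, ("127.0.0.1", 8125), "squid.icap_time", 42)
```

Because it is fire-and-forget UDP, the sender adds almost no overhead to the log pipeline.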
Hi

Is there a way to configure Squid to output the logs to statsd rather 
than a file?

Today I have this:

+logformat my_metrics  %icap::tt %

However I would like to avoid the overhead of parsing the log file by 
using statsd or something similar.


Thanks,
Moti

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Squid 4.14 : no_suid: setuid(0): (1) Operation not permitted

2021-02-28 Thread David Touzeau

Thanks Alex

This bug is a real "fog" for me, as I'm using Debian 10.x:

https://superuser.com/questions/731104/squid-proxy-cache-server-no-suid-setuid0-1-operation-not-permitted
https://forum.netgate.com/topic/67220/squid3-dev-transparente-con-clamav-64-bit-1a-prueba/2

Your answers over several years:
http://www.squid-cache.org/mail-archive/squid-users/201301/0399.html
https://www.mail-archive.com/search?l=squid-us...@squid-cache.org=subject:"\[squid\-users\]+Warning+in+cache.log"=newest=1

My last discussion, on squid 4.13:
https://www.spinics.net/lists/squid/msg93659.html


Many users say there is no impact on helpers or performance, as it is 
just a warning...


Can you confirm that?


On 28/02/2021 at 01:58, Alex Rousskov wrote:

On 2/27/21 7:22 PM, David Touzeau wrote:


Hi, I regularly get this error:

2021/02/28 01:18:43 kid1| helperOpenServers: Starting 5/32
'security_file_certgen' processes
2021/02/28 01:18:43 kid1| WARNING: no_suid: setuid(0): (1) Operation not
permitted

I have set the setuid permission:

chown root:squid security_file_certgen
chmod 04755 security_file_certgen

or
chown squid:squid security_file_certgen
chmod 0755 security_file_certgen

in both cases, squid always complains with "no_suid: setuid(0): (1)
Operation not permitted"

Sounds like bug 3785: https://bugs.squid-cache.org/show_bug.cgi?id=3785
That bug was filed many years ago and for a different helper/OS, but I
suspect it applies to your situation as well.



How can I fix it?

Unfortunately, I do not know the answer to that question. If it is
indeed bug 3785, then its current status is reflected by comment #5 at
https://bugs.squid-cache.org/show_bug.cgi?id=3785#c5


HTH,

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 4.14 : no_suid: setuid(0): (1) Operation not permitted

2021-02-27 Thread David Touzeau


Hi, I regularly get this error:

2021/02/28 01:18:43 kid1| helperOpenServers: Starting 5/32 
'security_file_certgen' processes
2021/02/28 01:18:43 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted


I have set the setuid permission:

chown root:squid security_file_certgen
chmod 04755 security_file_certgen

or
chown squid:squid security_file_certgen
chmod 0755 security_file_certgen

in both cases, squid always complains with "no_suid: setuid(0): (1) 
Operation not permitted"


How can I fix it?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-01-14 Thread David Touzeau
Yes, it seems to be the same bug, but the ticket is not really relevant 
(FreeBSD), as I'm on Debian with a modern kernel.


The main incomprehensible behavior is that the issue only occurs sometimes, 
even though setuid is a sticky-bit permission.


squid -v output:

Squid Cache: Version 4.13
Service Name: squid

This binary uses OpenSSL 1.1.1d  10 Sep 2019. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html


configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu' 
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info' 
'--localstatedir=/var' '--libexecdir=/lib/squid3' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' 
'--enable-gnuregex' '--enable-removal-policy=heap' 
'--enable-follow-x-forwarded-for' '--enable-removal-policies=lru,heap' 
'--enable-arp-acl' '--enable-truncate' '--with-large-files' 
'--with-pthreads' '--enable-esi' '--enable-storeio=aufs,diskd,ufs,rock' 
'--enable-x-accelerator-vary' '--with-dl' '--enable-linux-netfilter' 
'--with-netfilter-conntrack' '--enable-wccpv2' '--enable-eui' 
'--enable-auth' '--enable-auth-basic' '--enable-snmp' '--enable-icmp' 
'--enable-auth-digest' '--enable-log-daemon-helpers' 
'--enable-url-rewrite-helpers' '--enable-auth-ntlm' 
'--with-default-user=squid' '--enable-icap-client' 
'--disable-cache-digests' '--enable-poll' '--enable-epoll' 
'--enable-async-io=128' '--enable-zph-qos' '--enable-delay-pools' 
'--enable-http-violations' '--enable-url-maps' '--enable-ecap' 
'--enable-ssl' '--with-openssl' '--enable-ssl-crtd' 
'--enable-xmalloc-statistics' '--enable-ident-lookups' 
'--with-filedescriptors=65536' '--with-aufs-threads=128' 
'--disable-arch-native' '--with-logdir=/var/log/squid' 
'--with-pidfile=/var/run/squid/squid.pid' 
'--with-swapdir=/var/cache/squid' 'build_alias=x86_64-linux-gnu'



On 14/01/2021 at 05:43, Amos Jeffries wrote:

On 14/01/21 3:17 am, David Touzeau wrote:


Hi

This error is generated every 15 minutes when using any authenticator 
helper (ntlm, kerberos...)


Is there a way to investigate this issue?

kidxx| WARNING: no_suid: setuid(0): (1) Operation not permitted



This looks like <https://bugs.squid-cache.org/show_bug.cgi?id=3785>


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-01-13 Thread David Touzeau


Hi

This error is generated every 15 minutes when using any authenticator 
helper (ntlm, kerberos...)


Is there a way to investigate this issue?

kidxx| WARNING: no_suid: setuid(0): (1) Operation not permitted

Sometimes, after rebooting the system, the issue disappears for an 
undetermined period.


Using squid 4.13 on Debian 10 Intel 64 bits.

regards


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau
Yes, this is an hton of the IP address (ip2long); remove the ".addr" and 
convert back with long2ip.
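For anyone decoding these files, the conversion is the usual 32-bit big-endian packing. A quick sketch in Python, equivalent to the PHP ip2long/long2ip mentioned above:

```python
import socket
import struct

def ip2long(ip):
    """Dotted-quad string to a 32-bit integer, network byte order."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def long2ip(n):
    """32-bit integer back to a dotted-quad string."""
    return socket.inet_ntoa(struct.pack("!I", n))

# Decoding the sample key quoted in this thread:
print(long2ip(1490677018))  # -> 88.217.237.26
```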


On 04/01/2021 at 14:56, ngtech1...@gmail.com wrote:


Thanks David,

I don’t understand something:

1490677018.addr

Are these integers representations of IP addresses?

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com>

Zoom: Coming soon

*From:*David Touzeau 
*Sent:* Monday, January 4, 2021 3:25 PM
*To:* ngtech1...@gmail.com; squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] PCI Certification compliance lists


Hi Eliezer:

http://articatech.net/tmpf/categories/banking.gz
http://articatech.net/tmpf/categories/cleaning.gz



On 04/01/2021 at 10:27, ngtech1...@gmail.com wrote:


Hey David.

Indeed it should be done with the local websites; however, these
sites are pretty static.

Would it be OK to publish these lists online as a file/files?

The main issue is that ssl-bump requires a couple of “fast” acls.

I believe it should be a “fast” acl, but we also need the option to
use an external helper, as for many other functions.

If I can choose between “fast” as the default and the ability to run a
“slow” external acl helper, I can
choose what is right for/in my environment.

Currently I cannot program a helper that will decide whether a CONNECT
connection should be spliced or bumped programmatically.

It forces me to reload this list manually, which might take a couple
of seconds.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com>

Zoom: Coming soon

*From:*squid-users 
<mailto:squid-users-boun...@lists.squid-cache.org> *On Behalf Of
*David Touzeau
*Sent:* Monday, January 4, 2021 10:23 AM
*To:* squid-users@lists.squid-cache.org
<mailto:squid-users@lists.squid-cache.org>
*Subject:* Re: [squid-users] PCI Certification compliance lists

Hi Eliezer,

I can help you by giving a list, but just by using "main domains":

 1. Banking/transactions: 27,646 websites.
 2. AV software and update sites (fw, routers...): 133,295 websites.


I can give you the lists, but they are incomplete, and loading such huge
databases would decrease squid performance.
Perhaps it is better for the Squid administrator to build his own
list according to his country or company activity.




On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domains lists that can be used for squid to be PCI

Certified.

  


I have read this article:

https://www.imperva.com/learn/data-security/pci-dss-certification/

  


And a couple of others, to try to understand what a Squid proxy's 
ssl-bump

exception rules should contain.

So technically we need:

- Banks

- Health care

- Credit Cards(Visa, Mastercard, others)

- Payments sites

- Antivirus(updates and portals)

- OS and software Updates signatures(ASC, MD5, SHAx etc..)

  


*https://support.kaspersky.com/common/start/6105

*

https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e

set-product-with-a-third-party-firewall

*

https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s


55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc


p=TS100291&_afrLoop=641093247174514=0%25=fals


e=false=0%25=100%25#!%40%40%3FshowFooter%3


Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2


525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3

D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9

  

  


If someone has documents stating which domains not to 
inspect, that

would also help a lot.

  


Thanks,

Eliezer

  




Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

  

  

  



Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau


Hi Eliezer:

http://articatech.net/tmpf/categories/banking.gz
http://articatech.net/tmpf/categories/cleaning.gz



On 04/01/2021 at 10:27, ngtech1...@gmail.com wrote:


Hey David.

Indeed it should be done with the local websites; however, these sites 
are pretty static.


Would it be OK to publish these lists online as a file/files?

The main issue is that ssl-bump requires a couple of “fast” acls.

I believe it should be a “fast” acl, but we also need the option to use 
an external helper, as for many other functions.


If I can choose between “fast” as the default and the ability to run a 
“slow” external acl helper, I can

choose what is right for/in my environment.

Currently I cannot program a helper that will decide whether a CONNECT 
connection should be spliced or bumped programmatically.


It forces me to reload this list manually, which might take a couple of seconds.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Monday, January 4, 2021 10:23 AM
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] PCI Certification compliance lists

Hi Eliezer,

I can help you by giving a list, but just by using "main domains":

  * Banking/transactions: 27,646 websites.
  * AV software and update sites (fw, routers...): 133,295 websites.


I can give you the lists, but they are incomplete, and loading such huge
databases would decrease squid performance.
Perhaps it is better for the Squid administrator to build his own list
according to his country or company activity.




On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:


I am looking for domains lists that can be used for squid to be PCI

Certified.

I have read this article:

https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's ssl-bump

exception rules should contain.

So technically we need:

- Banks

- Health care

- Credit Cards(Visa, Mastercard, others)

- Payments sites

- Antivirus(updates and portals)

- OS and software Updates signatures(ASC, MD5, SHAx etc..)

*https://support.kaspersky.com/common/start/6105

*

https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e

set-product-with-a-third-party-firewall

*

https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s

55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc

p=TS100291&_afrLoop=641093247174514=0%25=fals

e=false=0%25=100%25#!%40%40%3FshowFooter%3

Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2

525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3

D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9

If someone has documents stating which domains not to inspect, that

would also help a lot.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau

Hi Eliezer,

I can help you by giving a list, but just by using "main domains":

 * Banking/transactions: 27,646 websites.
 * AV software and update sites (fw, routers...): 133,295 websites.


I can give you the lists, but they are incomplete, and loading such huge
databases would decrease squid performance.
Perhaps it is better for the Squid administrator to build his own list
according to his country or company activity.
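On the performance point: an in-memory set makes even a 133k-domain list cheap to query; only the load at startup is expensive. A hedged sketch of how such a list could back an external acl helper (the parent-domain matching rule and the OK/ERR framing are my assumptions about a minimal helper, not code from this thread):

```python
import sys

def load_domains(path):
    """One domain per line -> a set, for O(1) membership tests."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def listed(domain, domains):
    """True if the domain itself or any parent domain is on the list."""
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in domains for i in range(len(parts)))

def run_helper(domains, stdin=sys.stdin, stdout=sys.stdout):
    # Squid external_acl_type protocol: one lookup key per request line,
    # answer OK (match) or ERR (no match) on stdout.
    for line in stdin:
        stdout.write("OK\n" if listed(line.strip(), domains) else "ERR\n")
        stdout.flush()
```

The load happens once per helper process, which is why a reload (as Eliezer notes) is the part that takes seconds.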





On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domains lists that can be used for squid to be PCI
Certified.

I have read this article:
https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's ssl-bump
exception rules should contain.
So technically we need:
- Banks
- Health care
- Credit Cards(Visa, Mastercard, others)
- Payments sites
- Antivirus(updates and portals)
- OS and software Updates signatures(ASC, MD5, SHAx etc..)

* https://support.kaspersky.com/common/start/6105
*
https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-e
set-product-with-a-third-party-firewall
*
https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s
55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fc
p=TS100291&_afrLoop=641093247174514=0%25=fals
e=false=0%25=100%25#!%40%40%3FshowFooter%3
Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2
525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3
D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9


If someone has documents stating which domains not to inspect, that
would also help a lot.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] squid 4/5 feature request send login informations to peers

2020-11-19 Thread David Touzeau


Thanks Amos

You mean using "login=PASS" in the peer settings, and on parent proxies B 
and C using the "basic_fake_auth" helper to "simulate" the requested auth?



On 17/11/2020 at 11:43, Amos Jeffries wrote:

On 17/11/20 9:27 pm, David Touzeau wrote:


Hi,

We have a first Squid using Kerberos + Active Directory authentication.
This first squid is used to limit access using ACLs and Active 
Directory groups.


This first squid uses parents as peers in order to access the internet 
this way:


          | --> SQUID B --> Internet 1
SQUID A --|
          | --> SQUID C --> Internet 2

1) We want to use ACLs (for delegation purposes) on Squid B and C too.
2) For legal log-compliance purposes.

In this case, the username discovered on SQUID A must be transmitted
to SQUID B and C, and SQUID B/C must accept that information in order
to use it as login information when evaluating ACLs.


Is it possible ?


You can send the username. But the security token is tied to the 
client<->SquidA TCP connection - it cannot be validated by other 
servers than SquidA.


This should not matter though. Since Squid A is only permitting 
authenticated traffic you can *authorize* at Squid B and C based only 
on the source being one of your Squid with valid username.





If not: we have seen that the PROXY protocol allows transmitting the
source IP/login information to peers that are compliant with the PROXY
protocol.

But the peer method in Squid does not allow using the PROXY protocol.
Is it possible to add "PROXY protocol" support to the peer method?



It is possible to implement (for Squid-6 earliest) PROXYv2 for 
cache_peer. But the credentials security token remains tied to SquidA 
service.



Amos




[squid-users] squid 4/5 feature request send login informations to peers

2020-11-17 Thread David Touzeau


Hi,

We have a first Squid using Kerberos + Active Directory authentication.
This first Squid is used to limit access using ACLs and Active Directory
groups.


This first Squid uses parents as peers to access the Internet in
this way:


          | --> SQUID B --> Internet 1
SQUID A --|
          | --> SQUID C --> Internet 2

1) We want to use ACLs (for delegation purposes) on Squid B and C too.
2) For legal log-compliance purposes.

In this case, the username discovered on SQUID A must be transmitted to
SQUID B and C, and SQUID B/C must accept that information in order to use
it as login information when evaluating ACLs.


Is it possible ?

If not: we have seen that the PROXY protocol allows transmitting the
source IP/login information to peers that are compliant with the PROXY
protocol.

But the peer method in Squid does not allow using the PROXY protocol.
Is it possible to add "PROXY protocol" support to the peer method?








[squid-users] Squid4/5: Feature request identify access rules.

2020-11-07 Thread David Touzeau

When having several *_access directives (http_access, reply_access, ...),
in a stressed environment it is difficult to hunt down an issue or a wrong rule.

Debug mode is impractical because a proxy in production writes too many logs.

But if we could identify the rule and add a pointer to the log, it would be
possible to spot a wrong rule, or to see that a request passed through correctly.

Currently we have to do

acl acl1 src 1.2.3.4
http_access deny acl1



We suggest using the same token used in http_port:

acl acl1 src 1.2.3.4
http_access deny acl1 rulename=Rule.access1

And add a token for error-page templates (e.g. %RULENAME) and a logformat
token (%rname) that help identify the rule.


Added in bugtrack

https://bugs.squid-cache.org/show_bug.cgi?id=5087




Re: [squid-users] squid 4.10: ssl-bump on https_port requires tproxy/intercept which is missing in secure proxy method

2020-05-20 Thread David Touzeau

Thanks for the detailed answer.

How does one become a sponsor of such a feature, and at what cost?
Do you think it could be planned for 5.x?
I think it should become a future standard, in the same way as DNS over SSL.

On 19/05/2020 16:46, Alex Rousskov wrote:

On 18/05/20 10:15 am, David Touzeau wrote:

Hi we want to use squid as * * * Secure Proxy * * * using https_port
We have tested major browsers and it seems working good.

To make it work, we need to deploy the proxy certificate on all browsers
to make the secure connection running.

I hope that deployment is not necessary -- an HTTPS proxy should be
using a certificate issued for its domain name and signed by a
well-known CA already trusted by browsers. An HTTPS proxy is not faking
anything. If browsers do require CA certificate import in this
environment, it is their limitation.


On 5/19/20 9:24 AM, Matus UHLAR - fantomas wrote:

David, note that requiring browsers to connect to your proxy over encrypted
(https) connection, and then decrypting tunnels to real server will lower
the clients' security

A proper SslBump implementation for HTTPS proxy will not be "decrypting
tunnels to real server". The security of such an implementation will be
the same as of SslBump supported today (plus the additional protections
offered by securing the browser-proxy communication).

Cheers,

Alex.




Re: [squid-users] Squid 4.x acl server_cert_fingerprint for bump no matches

2020-05-19 Thread David Touzeau


Thanks Alex, I tried this one on Squid 4.10:


acl TestFinger server_cert_fingerprint 
77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9

acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step2
ssl_bump splice ssl_step3 TestFinger
ssl_bump stare ssl_step2 all
ssl_bump bump all

But no luck, the website is still decrypted.




On 13/05/2020 21:33, Alex Rousskov wrote:

On 5/12/20 7:42 AM, David Touzeau wrote:

ssl_bump peek ssl_step1
ssl_bump splice TestFinger
ssl_bump stare ssl_step2 all
ssl_bump bump all
Seems TestFinger Acls did not matches in any case

You are trying to use step3 information (i.e., the server certificate)
during SslBump step2: The "splice TestFinger" line is tested during
step2 and mismatches because the server certificate is still unknown
during that step. That mismatch results in Squid staring during step2.
The "splice TestFinger" line is not tested during step3 because splicing
is not possible after staring. Thus, Squid reaches "bump all" and bumps.

For a detailed description of what happens (and what information is
available) during each SslBump step, please see
https://wiki.squid-cache.org/Features/SslPeekAndSplice
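Applied to the configuration above, keeping the splice decision open until the server certificate is known at step 3 requires peeking (not staring) through step 2; an untested sketch, with the caveat that peeking at step 2 can in turn make bumping of the remaining traffic impossible:

```
acl TestFinger server_cert_fingerprint \
    77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9
acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step1
ssl_bump peek ssl_step2     # peek, not stare, so splicing stays possible
ssl_bump splice ssl_step3 TestFinger
ssl_bump bump all
```

The trade-off between peek and stare at step 2 (splice-preserving vs bump-preserving) is explained on the wiki page above.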

Also, if you are running v4.9 or earlier, please upgrade. We fixed one
server_cert_fingerprint bug, and that fix became a part of the v4.10
release (commit e0eca4c).


HTH,

Alex.




[squid-users] squid 4.10: ssl-bump on https_port requires tproxy/intercept which is missing in secure proxy method

2020-05-19 Thread David Touzeau



Hi we want to use squid as * * * Secure Proxy * * * using https_port
We have tested major browsers and it seems working good.

To make it work, we need to deploy the proxy certificate on all browsers 
to make the secure connection running.


In this case, Squid forwards requests without decrypting them, because
ssl-bump is not added.


But Adding the ssl-bump in https_port is not permitted :

"ssl-bump on https_port requires tproxy/intercept which is missing"

Why is bumping not allowed?



[squid-users] Squid 4.x acl server_cert_fingerprint for bump no matches

2020-05-12 Thread David Touzeau


Hi, I'm trying to play with the "server_cert_fingerprint" ACL to splice
websites.


First, get the fingerprint :

openssl s_client -host www.clubic.com -port 443 2> /dev/null | openssl 
x509 -fingerprint -noout



# Build the acl

acl TestFinger server_cert_fingerprint 
77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9



# I want squid to not bump this fingerprint.

acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step1
ssl_bump splice TestFinger
ssl_bump stare ssl_step2 all
ssl_bump bump all

But browsing the website still receives the Squid certificate, not the
original one.

It seems the TestFinger ACL does not match in any case.

Am I doing something wrong?


Regards.




[squid-users] TCP Fast open and squid4

2020-02-21 Thread David Touzeau

Hi

Does Squid handle TCP Fast Open on modern kernels?

Has anyone tried to implement this directive and noticed a performance
improvement?
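For what it's worth, on Linux the kernel has to allow TCP Fast Open before any application can use it; a sysctl fragment, assuming a kernel recent enough to support it (roughly 3.7+ for the client side, 3.13+ for both directions):

```
# /etc/sysctl.d/90-tfo.conf  (assumption: Linux host)
# 1 = allow TFO on outgoing connections, 2 = on listening sockets,
# 3 = both; apply with `sysctl --system`.
net.ipv4.tcp_fastopen = 3
```

Whether Squid itself then uses TFO on its sockets depends on the build and version, so this fragment only covers the kernel prerequisite.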


Best regards.


Re: [squid-users] squid v4: logformat log the last denied ACL object

2019-04-18 Thread David Touzeau


On 15/04/2019 22:41, Alex Rousskov wrote:

On 4/15/19 8:01 AM, David Touzeau wrote:


Is it possible, to better understand a bunch of ACLs, to log
the last match or the set of matched ACL objects? For example:
192.168.1.235 - - [15/Apr/2019:15:59:30 +0200] "GET
http://www.msftncsi.com/ncsi.txt HTTP/1.1" 200 211 "-" "curl/7.52.1"
TCP_MISS:HIER_DIRECT text/plain objects1,objects2

Yes, it is possible to do something like that in modern Squids, but
covering all ACLs in a non-trivial squid.conf would require tedious
manual work or automation. Here is a rough untested recipe:

1. For each named ACL x that you want to access-log, create a wrapper
annotation ACL called matchAndLogX:

acl x ...
acl annotateAfterX annotate_transaction matchedAcls+=x
acl matchAndLogX all-of x annotateAfterX


2. For each named ACL x wrapped in step 1, replace all its uses in old
squid.conf directives with the matchAndLogX ACLs defined in step 1. For
example:

http_access deny x y

becomes

http_access deny matchAndLogX matchAndLogY


3. Add matchedAcls annotation to your logformat definition to log
annotations accumulated by the wrapper ACLs defined in step 1:

logformat myAccessRecord ...  %note{matchedAcls}
access_log ... logformat=myAccessRecord ...


Depending on your actual configuration, you may be able to reduce the
amount of logging/wrapping if you annotate groups of matching ACLs
rather than each individual ACL. For example:

 acl annotateAfterXandY annotate_transaction matchedAcls+=(x,y)
 http_access deny x y annotateAfterXandY


Needless to say, adding such annotations manually to a non-trivial
configuration is a lot of error-prone work! Automating wrapping,
monitoring cache.log with elevated debugging levels (see debug_options),
or hacking Squid to log the info you need is a better approach in many
(most?) cases.
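Putting the three steps above together, a minimal untested squid.conf fragment (the ACL definition and the log path are placeholders):

```
acl x src 192.168.1.0/24
acl annotateAfterX annotate_transaction matchedAcls+=x
acl matchAndLogX all-of x annotateAfterX

http_access deny matchAndLogX

logformat withAcls %ts.%03tu %>a %Ss/%03>Hs %rm %ru %note{matchedAcls}
access_log daemon:/var/log/squid/access.log logformat=withAcls
```

Each denied request would then carry "matchedAcls: x" in the access log, identifying which wrapped ACL fired.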


HTH,

Alex.


Thanks !!!

Will try both options





Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-04-16 Thread David Touzeau


On 02/04/2019 10:39, Amos Jeffries wrote:

On 2/04/19 8:53 pm, L.P.H. van Belle wrote:

I suggest you start by comparing the logs you posted; the builds are really different.

Differences in:
- kernel
- needed packages
- build parameters due to missing or different packages
Etc.

Just diff your logs and you will see it.


The biggest there is C++11 support being enabled on CentOS. That alone
enables quite a few performance optimizations in the stdlib template code.

Amos


Hi,


We have tested Squid on Debian 10 and performance is now the same as
CentOS 7.

So Debian 10 should be the best choice, but it is not released yet...





[squid-users] squid v4: logformat log the last denied ACL object

2019-04-15 Thread David Touzeau

Hi

Is it possible, to better understand a bunch of ACLs, to log
the last match or the set of matched ACL objects?



example


192.168.1.235 - - [15/Apr/2019:15:59:30 +0200] "GET 
http://www.msftncsi.com/ncsi.txt HTTP/1.1" 200 211 "-" "curl/7.52.1" 
TCP_MISS:HIER_DIRECT text/plain objects1,objects2




Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-04-02 Thread David Touzeau


On 02/04/2019 18:06, Alex Rousskov wrote:

On 4/2/19 1:23 AM, David Touzeau wrote:

On 01/04/2019 23:22, Alex Rousskov wrote:

Do your Squids use shared memory for the memory cache? See
memory_cache_shared (even if you do not set it explicitly).
http://www.squid-cache.org/Doc/config/memory_cache_shared/

The test did not use workers

That does not answer my question. Do you use Rock cache_dir(s)?



Any significant difference in mgr:info and mgr:counters output after a
test that only has memory hits?

The question still stands. I would recommend testing this with a single
URL and a fixed/same number of requests submitted by a reliable proxy
benchmarking tool or at least a wget/curl script.



Do you know why CentOS objects are 34 bytes smaller than Debian ?

Something in your test setup or environment results responses (or
response delivery statistics) that differ in size between the tests. I
do not know what it is, and the number of possible options is too large
to guess correctly: It could be anything from 32-vs-64 bit OSes, to
locale differences, to Squid host name, to Cookies, to test setup
imperfections, to Squid statistics collection bugs, etc., etc.

Have you compared the responses received by the client (headers and
all)? Do they differ by 34 bytes? I suggest testing with a single URL
that produces different results and then digging down to identify the
difference (starting with comparing responses).

Alex.


Thanks Alex for these ways to investigate.

We will try to get more precise for the tests

We have reduced squid.conf to the minimum in order to be sure
that nothing can disturb the testing.


Only one cache, no rock, no workers, no tuning

Amos says that perhaps the C++ version enables some tweaks; in that case
we will start to do the same tests with Ubuntu, which uses the most recent
kernel and C++.


But for the moment:

- without intelligence
- without investigation effort
- just by compiling Squid

To summarize:

CentOS 7 is 10 times faster than Debian 7.
CentOS 7 is 400% faster than Debian 9.

Debian 9 is a little faster than Debian 7.

We will keep you updated on Ubuntu.







Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-04-02 Thread David Touzeau


On 02/04/2019 07:43, L A Walsh wrote:

On 4/1/2019 2:17 AM, David Touzeau wrote:

We have recompiled same squid version on 2 systems
https://github.com/dtouzeau/1.6.x/blob/Tempfiles/centos7-config.log?raw=true

---
Result was CentOS 44% faster on TCP_MEM_HITS
---
   

What kernels are the two systems running?

Are the config options exactly the same?

Just a WAG, but but are the settings for
CONFIG_TRANSPARENT_HUGEPAGE the same for both?


Yes, it is the same: always [madvise] never







Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-04-02 Thread David Touzeau


On 01/04/2019 23:22, Alex Rousskov wrote:

On 4/1/19 3:17 AM, David Touzeau wrote:


On 30.03.19 10:22, David Touzeau wrote:

* Debian 9 net install + Squid compiled
* CentOS 7 minimal  + Squid compiled

Same version, same compilation parameters, same Squid settings.
It seems that Squid on CentOS is 10 times faster than squid on Debian



We have recompiled same squid version on 2 systems

No march= using --disable-arch-native on both systems

Debian config.log
https://github.com/dtouzeau/1.6.x/blob/Tempfiles/debian9-config.log?raw=true

Centos config.log
https://github.com/dtouzeau/1.6.x/blob/Tempfiles/centos7-config.log?raw=true

Result was CentOS 44% faster on TCP_MEM_HITS

Just to clarify: Did changing ./configure options alone move you from
1000% to 44%? Or was the earlier "10 times" just a crude approximation
that we should ignore now?


Do your Squids use shared memory for the memory cache? See
memory_cache_shared (even if you do not set it explicitly).
http://www.squid-cache.org/Doc/config/memory_cache_shared/

Any significant difference in mgr:info and mgr:counters output after a
test that only has memory hits?

Alex.


Hi Alex and community,

The test did not use workers.

Here is a piece of the logs from the two machines:

CentOS 7:
1554185117.132  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
10979 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0611_2.jpg 
- HIER_NONE/- image/jpeg
1554185117.133  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
5531 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0611_5.jpg 
- HIER_NONE/- image/jpeg
1554185117.134  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
3727 GET 
http://www.projetmontsaintmichel.com//upload/document/minis/capture_40.jpg 
- HIER_NONE/- image/jpeg
1554185117.137  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
1230 GET http://www.projetmontsaintmichel.com/web/images/ico_pdf.png - 
HIER_NONE/- image/png
1554185117.141  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
33600 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0609_6.gif 
- HIER_NONE/- image/gif
1554185117.142  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
20200 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0609_2.gif 
- HIER_NONE/- image/gif
1554185117.144  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
29375 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0609_5.gif 
- HIER_NONE/- image/gif
1554185117.146  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
29835 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0609_4.gif 
- HIER_NONE/- image/gif
1554185117.147  2 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
28683 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0609_1.gif 
- HIER_NONE/- image/gif
1554185117.149  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
7715 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0608_3.jpg 
- HIER_NONE/- image/jpeg
1554185117.151  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
8175 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0608_2.jpg 
- HIER_NONE/- image/jpeg
1554185117.152  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
2519 GET 
http://www.projetmontsaintmichel.com/web/images/bloc_infoschantier2.gif 
- HIER_NONE/- image/gif
1554185117.153  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
3870 GET 
http://www.projetmontsaintmichel.com/web/images/bloc_espacepro2.gif - 
HIER_NONE/- image/gif
1554185117.157  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
9349 GET 
http://www.projetmontsaintmichel.com/upload/document/reduites/TR_BA_0608_1.jpg 
- HIER_NONE/- image/jpeg
1554185117.162  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
3622 GET 
http://www.projetmontsaintmichel.com//upload/document/minis/capture_29.jpg 
- HIER_NONE/- image/jpeg
1554185117.162  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 409 
GET 
http://www.projetmontsaintmichel.com/web/images/puce_carre_visite.gif - 
HIER_NONE/- image/gif
1554185117.162  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 409 
GET http://www.projetmontsaintmichel.com/web/images/puce_carre_gris.gif 
- HIER_NONE/- image/gif
1554185117.175  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
23219 GET 
http://www.projetmontsaintmichel.com/web/images/fond_footer.jpg - 
HIER_NONE/- image/jpeg
1554185117.187  0 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 540 
GET http://www.projetmontsaintmichel.com/web/galerie/images/overlay.png 
- HIER_NONE/- image/png
1554185117.389  2 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 858 
GET http://www.projetmontsaintmichel.com/favicon.ico - HIER_NONE/- 
image/x-icon


Debian 9:
1554185129.651  1 172.16.1.228 50:46:5d:a0:3e:5a TCP_MEM_HIT/200 
8887 GET 
http://www.projetmontsaintmichel.com/upload/documen

Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-04-01 Thread David Touzeau


On 01/04/2019 00:23, David Touzeau wrote:


On 31/03/2019 05:50, Amos Jeffries wrote:

On 31/03/19 3:41 am, David Touzeau wrote:

On 30.03.19 10:22, David Touzeau wrote:


Did you have perform squid stress on Debian against CentOS ?

I have installed:

* Debian 9 net install + Squid compiled
* CentOS 7 minimal  + Squid compiled

Same version, same compilation parameters, same Squid settings.
It seems that Squid on CentOS is 10 times faster than squid on Debian

faster in what? Response time? number of parallel connections?
single or multiple connection data transfers?
HTTP or HTTPS?


What are kernel differences that made this huge performance changes?

no kernel differences should cause 10x speed difference.


If you still have the config.log files from the build you may be able to
track down something being detected (or not) in one of the builds.

The -march=native or -O level options for compile would be the first
place I look for a major difference like that. Either on Squid or on one
of the system libraries it uses. The *FLAGS summary at the end of the
build can be a good starting point for comparison.

Compiler version can also have an effect as newer compilers use more
performance related tricks than older ones (YMMV on which tricks are
actually better).



Faster in what? Response time?

1. response time, MISS and HIT are faster

Example:

on CentOS, MEM_HITs take about 0-1 msec, versus about 3-4 msec on Debian


On the same test traffic?


Amos


Thanks Amos, we will be careful during compilation.

But to be sure about the tests:

we used the same settings, same cache, same hardware, and same destination
websites.



Hi Amos and Community...

We have recompiled same squid version on 2 systems

No march= using --disable-arch-native on both systems

Debian config.log

https://github.com/dtouzeau/1.6.x/blob/Tempfiles/debian9-config.log?raw=true

Centos config.log

https://github.com/dtouzeau/1.6.x/blob/Tempfiles/centos7-config.log?raw=true

---
Result was CentOS 44% faster on TCP_MEM_HITS
---





Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-03-31 Thread David Touzeau


On 31/03/2019 05:50, Amos Jeffries wrote:

On 31/03/19 3:41 am, David Touzeau wrote:

On 30.03.19 10:22, David Touzeau wrote:


Did you have perform squid stress on Debian against CentOS ?

I have installed:

* Debian 9 net install + Squid compiled
* CentOS 7 minimal  + Squid compiled

Same version, same compilation parameters, same Squid settings.
It seems that Squid on CentOS is 10 times faster than squid on Debian

faster in what? Response time? number of parallel connections?
single or multiple connection data transfers?
HTTP or HTTPS?


What are kernel differences that made this huge performance changes?

no kernel differences should cause 10x speed difference.


If you still have the config.log files from the build you may be able to
track down something being detected (or not) in one of the builds.

The -march=native or -O level options for compile would be the first
place I look for a major difference like that. Either on Squid or on one
of the system libraries it uses. The *FLAGS summary at the end of the
build can be a good starting point for comparison.

Compiler version can also have an effect as newer compilers use more
performance related tricks than older ones (YMMV on which tricks are
actually better).



Faster in what? Response time?

1. response time, MISS and HIT are faster

Example:

on CentOS, MEM_HITs take about 0-1 msec, versus about 3-4 msec on Debian


On the same test traffic?


Amos


Thanks Amos, we will be careful during compilation.

But to be sure about the tests:

we used the same settings, same cache, same hardware, and same destination
websites.







Re: [squid-users] Why Squid on CentOS is faster than Debian ?

2019-03-30 Thread David Touzeau

On 30.03.19 10:22, David Touzeau wrote:


Did you have perform squid stress on Debian against CentOS ?

I have installed:

* Debian 9 net install + Squid compiled
* CentOS 7 minimal  + Squid compiled

Same version, same compilation parameters, same Squid settings.



It seems that Squid on CentOS is 10 times faster than squid on Debian


faster in what? Response time? number of parallel connections?
single or multiple connection data transfers?
HTTP or HTTPS?


What are kernel differences that made this huge performance changes?


no kernel differences should cause 10x speed difference.



Faster in what? Response time?

1. response time, MISS and HIT are faster

Example:

on CentOS, MEM_HITs take about 0-1 msec, versus about 3-4 msec on Debian

Number of parallel connections? single or multiple connection data 
transfers?


2. multiple

HTTP or HTTPS?

3. HTTP



[squid-users] Why Squid on CentOS is faster than Debian ?

2019-03-30 Thread David Touzeau

Hi all,

Have you performed Squid stress tests on Debian versus CentOS?

I have installed:

 * Debian 9 net install + Squid compiled
 * CentOS 7 minimal  + Squid compiled

Same version, same compilation parameters, same Squid settings.

It seems that Squid on CentOS is 10 times faster than squid on Debian

What kernel differences could cause this huge performance change?


Best regards



Re: [squid-users] squid 4.x: decided: do not cache but share because the entry has been released

2019-02-24 Thread David Touzeau
Many thanks for the explanation.

There was a misconfiguration in the config file:

"cache deny all"

It's a shame...

-Original Message-
From: squid-users  On Behalf Of Alex
Rousskov
Sent: Saturday, 23 February 2019 23:16
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 4.x: decided: do not cache but share because
the entry has been released

On 2/23/19 10:17 AM, Amos Jeffries wrote:
> On 24/02/19 5:33 am, David Touzeau wrote:
>> http.cc(982) haveParsedReplyHeaders: decided: do not cache but share 
>> because the entry has been released; HTTP status 200

>> What “but share because the entry has been released” event means ?

> 'do not cache but share' means the reply may still be shared with 
> other concurrent clients (eg. collapsed forwarding), but not to bother 
> trying to cache it.

Correct. To participate in that sharing, those concurrent clients must already 
have a lock on this entry. In other words, "concurrency" here is determined by 
having guaranteed access to the Store entry rather than just overlapping 
transaction lifetimes.


> 'entry has been released' means something else already caused the disk 
> copy in cache to be removed or replaced.

Yes, and this is not limited to the old entries in the disk cache. Entry 
"release" may happen even before the entry is earmarked for any of the caches, 
and the release affects both disk and memory caches.


If you want to figure out why this response is not being cached, you may need 
to figure out why the corresponding Store entry was marked for release. Look 
for releaseRequest lines in the debugging cache.log that match the same entry 
and try to determine why releaseRequest was called.

Alex.


[squid-users] squid 4.x: decided: do not cache but share because the entry has been released

2019-02-23 Thread David Touzeau
Hi

 

I'm trying to cache an Internet file.

 

Running Squid in debug mode says:

 

http.cc(982) haveParsedReplyHeaders: decided: do not cache but share because
the entry has been released; HTTP status 200

 

What does the "but share because the entry has been released" event mean?




Re: [squid-users] Squid 4.x: cache_peer PROXY_PROTOCOL support with squid parents

2019-02-23 Thread David Touzeau

Currently we are working on Kerberos with Active Directory, with HAProxy
sending requests to Squid using the PROXY protocol.
Everything works great, but we want to replace HAProxy with a Squid.
In fact, we want the client-side Squid to send the credential information to a
parent Squid, in order to centralize ACLs on the parent proxy according to
the user's login name.
Do you have any suggestions?

Best regards




-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Saturday, 23 February 2019 04:07
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 4.x: cache_peer PROXY_PROTOCOL support with
squid parents

On 23/02/19 2:45 am, David Touzeau wrote:
> Hi,
>
>
>
> We would like to use this infrastructure:
>
>
>
> Squid-cache client authentication 1 --|
>                                        |--> Squid parent with ACLs per
>                                        |    user/LDAP groups/Web filtering
>                                        |    ---> INTERNET
> Squid-cache client authentication 2 --|
>
>
>
>
>
> Currently this kind of infrastructure cannot be done because the Squid
> that acts as a client did not send credentials information to the
> parent proxy.
>

There are many types of "client authentication" that can exist in multiple
nested protocol layers:

* HTTP WWW-Auth* credentials

* HTTP Proxy-Auth* credentials

* TLS client X.509 certificate

* CONNECT tunnel Proxy-Auth*

* TCP connection-auth scheme credentials (NTLM, Negotiate)

* IPSEC key exchange

* EUI

* IDENT user name

Which one(s) are you talking about?


>
> We think it could be done if cache_peer were compliant with the
> PROXY protocol RFC, as http_port already is.
>

What are you thinking PROXY would be doing to help with the situation?

Keep in mind that the PROXY header needs to be sent before any other bytes
on the server connection. Which immediately limits the cases where any type
of client information is available.


>
> Do you have plans to add PROXY_PROTOCOL inside cache_peer feature ?
>
>

To whom are you addressing this question?


Cheers,
Amos


Re: [squid-users] Transparent vs Tproxy: performance ?

2018-09-02 Thread David Touzeau
Thanks Amos,

Yes, my question was "is NAT faster or slower than packet flow", and yes,
you are right.

"Squid is not impacted by this question" makes sense.

I had a "feeling" (a human sensation), with 3,000 users, that NAT was faster
than TPROXY...

But you confirm that this is not relevant...

Best regards,


-Original Message-
From: squid-users  On Behalf Of Amos
Jeffries
Sent: Saturday, 1 September 2018 17:07
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Transparent vs Tproxy: performance ?

On 1/09/18 9:33 PM, David Touzeau wrote:
> Hi
> 
> We have 2 ways to make Squid run in "transparent mode":
> 
> the standard transparent method and (with modern kernels) the use of
> the "TPROXY" method
> 

Please clarify what this "standard transparent" thing is you are referring to?

I suspect that you actually mean "NAT", which is completely separate from Squid
and thus has no bearing on proxy performance.



> I would like to know which is the best according to the performance ?
> 

This is a meaningless question. "comparing apples to oranges", etc.

You might as well ask if NAT is faster or slower than packet flow?


Both NAT and TPROXY involve the kernel managing tables of active connections and
syscalls by Squid to search those tables on every accept(). Only the timing of
those syscalls and the state listed in the tables differ. The limitations each
imposes are more relevant than any performance differences.

Specifically;

* TPROXY restricts the TCP ports available to clients to 31K, where normally 
they are 63K.

* NAT systems restrict ports to (63*M)/N where N is number of clients on the 
network, and M the number of IPs available to Squid outbound (usually 1).
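As a rough illustration of the NAT formula above (the client count and IP count here are hypothetical, not taken from any particular deployment):

```python
def nat_ports_per_client(total_ports: int, outbound_ips: int,
                         clients: int) -> int:
    """Approximate outbound ports available per client behind NAT:
    (total_ports * outbound_ips) / clients, per the formula above."""
    return (total_ports * outbound_ips) // clients

# e.g. ~63K usable ports, 1 outbound IP, 3000 clients:
nat_ports_per_client(63000, 1, 3000)   # only about 21 ports per client
```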

As you can see those will impose a cap on both performance and capability of 
your network. How much is determined by your network size and traffic peak 
flows. Not by anything related to Squid.


Squid performance should be essentially the same for all traffic "modes". It is 
driven by the HTTP features used in the messages happening, combined with what 
types of processing your config requires to be done on those messages.
So by crafting the very extreme types of message one can flood a Gbps network 
with a single HTTP request, or pass thousands of transactions quickly over a 
56Kbps modem link.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Transparent vs Tproxy: performance ?

2018-09-01 Thread David Touzeau
Hi 

 

We have 2 ways to make the squid in « transparent mode ».

 

The standard Transparent method and (with modern kernels) the use of the
« Tproxy » method

 

I would like to know which is the best according to the performance ?

 

Or is it the same ?

 

Best regards.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v4.2 url_rewrite Uri.cc line 371 bad URL parsing on SSL

2018-08-16 Thread David Touzeau
Thanks Amos for details.

Working like a charm now.

Instead of sending "https://192.168.1.122:443/myguard.php?rule-id=0", the
helper now sends "192.168.1.122:443".


"url_rewrite_access deny CONNECT" is not a solution because everything uses SSL
today (thanks to Google, which wants to encrypt the whole Net and make
proxies/firewalls/ICAP unusable), and many porn/malware/hacking/hacked websites
use SSL.




-Original Message-
From: squid-users  On behalf of Amos
Jeffries
Sent: Thursday, August 16, 2018 03:51
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] v4.2 url_rewrite Uri.cc line 371 bad URL parsing on
SSL

On 16/08/18 11:58, David Touzeau wrote:
> Hi,
> 
>  
> 
> I have written my own url_rewrite helper
> 
>  
> 
> On SSL sites, the helper answering a redirect to a remote denied php  page.
> 

No, your helper *rewrites* the URL without changing any other properties of the
request message. This can be seen clearly in the use of "rewrite-url=" instead
of "url=".

The difference is important when it comes to the type of message being 
processed.

> 
> With HTTP, no issue but on SSL there is a different behavior
> 
> My helper return
> 
> rewrite-url= https://192.168.1.122:443/myguard.php?rule-id=0;
> 
> but according to debug, the Uri.cc understand : host='https', 
> port='443', path=''
> 
> In this case, squid try to connect to an https machine name and return 
> bad 503
> 
>  
...
> 
> Did i miss something ???
> 

Look at the input received by the helper. HTTPS uses CONNECT requests.
Those messages have an authority-form URI, not a URL. The above behaviour is
what happens when your helper's response is interpreted according to
authority-form syntax.

<https://tools.ietf.org/html/rfc7230#section-5.3.3>


You can prevent the SSL-Bump CONNECT messages being sent to the re-writer with:
  url_rewrite_access deny CONNECT

OR,
 you can try to do a proper redirect by having the helper send:
  OK status=302 url=...


The latter *might* work. Depending on whether the client handles redirection on 
CONNECT requests. Browsers don't support anything other than 200 status. Other 
clients have a mix of behaviours so its somewhat unreliable.
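A minimal sketch of a url_rewrite helper along those lines, assuming no helper concurrency (the block-page URL is hypothetical, and the exact input fields depend on your url_rewrite_extras configuration):

```python
# Hypothetical block page the helper redirects CONNECT requests to.
BLOCK_URL = "http://192.168.1.122/myguard.php?rule-id=0"

def handle(line: str) -> str:
    """Answer one url_rewrite helper request line.

    With no concurrency, Squid sends "URL extras...".  For CONNECT
    requests the URL is authority-form (host:port), so we answer with a
    302 redirect rather than a rewrite; other requests pass through
    unchanged ("ERR" means "no change" in the helper protocol).
    """
    url = line.split()[0]
    if "://" not in url:            # authority-form => CONNECT request
        return f"OK status=302 url={BLOCK_URL}"
    return "ERR"                    # leave this request's URL alone

# In the real helper, the main loop would read stdin line by line:
#   for req in sys.stdin:
#       print(handle(req.strip()), flush=True)
```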

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


  1   2   >