Re: [squid-users] Help with squid Proxy

2023-07-12 Thread Antony Stone
On Wednesday 12 July 2023 at 18:11:08, Andrés Leandro Regalado wrote:

> I implemented a Squid proxy in a small office to filter the internet, and
> now it blocks communication between the mail client and the mail server. I
> need to know how I can allow Outlook or Thunderbird to work through Squid.

I'm strongly tempted simply to say that you need to change your Squid or 
router configuration in order to fix this problem.

To get any further, please give us details of what you have configured so
far; otherwise we're just guessing in the dark.

Tell us enough about your setup that we could reproduce it on our own
networks, and we may be able to suggest what needs changing.


Don't forget to include details of how Outlook and Thunderbird connect (or
at least try to connect) to your mail server.
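As a starting point: mail clients can only traverse Squid if they are set to
use it as a proxy, and Squid permits CONNECT to the mail ports. A minimal
sketch, assuming the standard TLS mail ports and the stock Safe_ports and
SSL_ports ACLs:

  acl mail_ports port 465 587 993 995   # SMTPS/submission, IMAPS, POP3S
  acl Safe_ports port 465 587 993 995   # extend the default Safe_ports list
  acl SSL_ports port 465 587 993 995    # permit CONNECT to these ports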


Antony.

-- 
Programming is a Dark Art, and it will always be. The programmer is
fighting against the two most destructive forces in the universe:
entropy and human stupidity. They're not things you can always
overcome with a "methodology" or on a schedule.

 - Damian Conway, Perl God

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with squid Proxy

2023-07-12 Thread Andrés Leandro Regalado



Hello, dear community,


I need help: I implemented a Squid proxy in a small office to filter the
internet, and now it blocks communication between the mail client and the
mail server. I need to know how I can allow Outlook or Thunderbird to work
through Squid.


Thank you.

--
Ing. Andrés Leandro Regalado
Clínica Independencia Norte
Tecnología
809-385-2787 ext. 348
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help to understand tcp_denied in access.log

2023-04-14 Thread Alex Rousskov

On 4/14/23 06:36, andre.bolin...@articatech.com wrote:


The mechanism is http_access; the size of the error page is around 500KB.


>> Each TCP_DENIED request is consuming 40+ bytes

If your custom error page is around 500KB, then we should not be surprised
that the corresponding %<st values are large.



The Squid version is 5.8 and I'm not doing SSL bump for this domain.
When you ask to "collect a packet trace", does that mean putting Squid in
debug mode, i.e. squid -k debug?


No, I was thinking about something along these lines:

  tcpdump -s0 -w packet-trace.pcap ...

However, with your 500KB statement, there is no need for a packet trace 
because Squid is simply sending your large custom error responses to 
denied clients, as instructed. Mystery solved.


Please note that popular browsers will not display CONNECT error 
responses, but client behavior is client-dependent, so YMMV.


Do you want Squid to respond with your custom 500KB error page? If yes, 
there is nothing you need to do. Otherwise, please clarify what you want 
Squid to do instead.
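For example, if the goal is just a smaller denial response, deny_info can
point the denying ACL at one of Squid's stock error templates instead of the
500KB custom page - a sketch, with "blocked_sites" standing in for whatever
ACL your http_access deny rule actually uses:

  deny_info ERR_ACCESS_DENIED blocked_sites
  http_access deny blocked_sites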



HTH,

Alex.



-Original Message-
From: squid-users  On Behalf Of Alex
Rousskov
Sent: 14 April 2023 04:01
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Help to understand tcp_denied in access.log

On 4/13/23 21:23, andre.bolin...@articatech.com wrote:


I'm seeing too many requests to the website mainnet.infura.io; analyzing
the access.log, it seems that the website is blocked


Which directive/mechanism blocks them (e.g., http_access,
reply_body_max_size, ICAP/eCAP, etc.)?



Each TCP_DENIED request is consuming 40+ bytes


Assuming you do not use huge custom TCP_DENIED error pages, I agree that
these entries look suspicious, as if Squid denied access but continued
to tunnel the traffic. The response times are fairly small, but probably
large enough to transmit those amounts of data from a fast server.

Since most requests (for the affected domain) are problematic, can you
collect a packet trace and see if you can confirm that these
transactions transmit a lot of data from Squid to the client? If IPs are
not enough, logging client TCP port (%>p) may help you match specific
access.log entries with TCP connections in the packet trace...
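A sketch of such a logformat (the %>p field is the client's TCP source port;
the other fields mirror the common squid format):

  logformat withport %ts.%03tu %6tr %>a:%>p %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
  access_log /var/log/squid/access.log withport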


What Squid version are you using for this? Does SslBump affect the
problematic transactions?


Thank you,

Alex.




but I also notice that the
requests are consuming bandwidth; here is an example.
Squid access.log format:
%ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a:%<p %mt
mac="%>eui" %note ua="%{User-Agent}>h" exterr="%err_code|%err_detail"

Access.log request.
1681099742.517 35 10.81.216.114 TCP_DENIED_ABORTED/407 41154 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"


category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.575 41 10.81.216.114 TCP_DENIED/407 511819 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"


category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.664 73 10.81.216.114 NONE/200 0 CONNECT mainnet.infura.io:443
HLBHO/tsyafiq HIER_NONE/-:- - mac="00:00:00:00:00:00"


category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac
OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0
Safari/537.36" exterr="-|-"

1681099742.685 20 10.81.216.114 TCP_DENIED_ABORTED/403 450655 CONNECT
mainnet.infura.io:443 HLBHO/tsyafiq HIER_NONE/-:- text/html
mac="00:00:00:00:00:00"


category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="-" exterr="ERR_ACCESS_DENIED|-"

Each TCP_DENIED request is consuming 40+ bytes, so at the end of the day
sometimes I have a total of 56k requests to mainnet.infura.io consuming
around 15GB of bandwidth.

My question is, assuming that %<st is the size of the response sent to the
client, why is TCP_DENIED taking a lot of bandwidth to block a website?

Best regards


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help to understand tcp_denied in access.log

2023-04-14 Thread andre.bolinhas
Hi Alex,
The mechanism is http_access; the size of the error page is around 500KB.
The Squid version is 5.8 and I'm not doing SSL bump for this domain.
When you ask to "collect a packet trace", does that mean putting Squid in
debug mode, i.e. squid -k debug?
Best regards

-Original Message-
From: squid-users  On Behalf Of Alex
Rousskov
Sent: 14 April 2023 04:01
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Help to understand tcp_denied in access.log

On 4/13/23 21:23, andre.bolin...@articatech.com wrote:

> I'm seeing too many requests to the website mainnet.infura.io; analyzing
> the access.log, it seems that the website is blocked

Which directive/mechanism blocks them (e.g., http_access,
reply_body_max_size, ICAP/eCAP, etc.)?


> Each TCP_DENIED request is consuming 40+ bytes 

Assuming you do not use huge custom TCP_DENIED error pages, I agree that 
these entries look suspicious, as if Squid denied access but continued 
to tunnel the traffic. The response times are fairly small, but probably 
large enough to transmit those amounts of data from a fast server.

Since most requests (for the affected domain) are problematic, can you 
collect a packet trace and see if you can confirm that these 
transactions transmit a lot of data from Squid to the client? If IPs are 
not enough, logging client TCP port (%>p) may help you match specific 
access.log entries with TCP connections in the packet trace...


What Squid version are you using for this? Does SslBump affect the 
problematic transactions?


Thank you,

Alex.



> but I also notice that the
> requests are consuming bandwidth; here is an example.
> Squid access.log format:
> %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a:%<p %mt
> mac="%>eui" %note ua="%{User-Agent}>h" exterr="%err_code|%err_detail"
> 
> Access.log request.
> 1681099742.517 35 10.81.216.114 TCP_DENIED_ABORTED/407 41154 CONNECT
> mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
>
> category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
> ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
> exterr="ERR_CACHE_ACCESS_DENIED|-"
> 
> 1681099742.575 41 10.81.216.114 TCP_DENIED/407 511819 CONNECT
> mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
>
> category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
> ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
> exterr="ERR_CACHE_ACCESS_DENIED|-"
> 
> 1681099742.664 73 10.81.216.114 NONE/200 0 CONNECT mainnet.infura.io:443
> HLBHO/tsyafiq HIER_NONE/-:- - mac="00:00:00:00:00:00"
>
> category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
> ua="Mozilla/5.0 (Macintosh; Intel Mac
> OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0
> Safari/537.36" exterr="-|-"
> 
> 1681099742.685 20 10.81.216.114 TCP_DENIED_ABORTED/403 450655 CONNECT
> mainnet.infura.io:443 HLBHO/tsyafiq HIER_NONE/-:- text/html
> mac="00:00:00:00:00:00"
>
> category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
> ua="-" exterr="ERR_ACCESS_DENIED|-"
> 
> Each TCP_DENIED request is consuming 40+ bytes, so at the end of the day
> sometimes I have a total of 56k requests to mainnet.infura.io consuming
> around 15GB of bandwidth.
> 
> My question is, assuming that %<st is the size of the response sent to the
> client, why is TCP_DENIED taking a lot of bandwidth to block a website?
> 
> Best regards
> 
> 
> 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help to understand tcp_denied in access.log

2023-04-13 Thread Alex Rousskov

On 4/13/23 21:23, andre.bolin...@articatech.com wrote:


I'm seeing too many requests to the website mainnet.infura.io; analyzing the
access.log, it seems that the website is blocked


Which directive/mechanism blocks them (e.g., http_access, 
reply_body_max_size, ICAP/eCAP, etc.)?



Each TCP_DENIED request is consuming 40+ bytes 


Assuming you do not use huge custom TCP_DENIED error pages, I agree that 
these entries look suspicious, as if Squid denied access but continued 
to tunnel the traffic. The response times are fairly small, but probably 
large enough to transmit those amounts of data from a fast server.


Since most requests (for the affected domain) are problematic, can you 
collect a packet trace and see if you can confirm that these 
transactions transmit a lot of data from Squid to the client? If IPs are 
not enough, logging client TCP port (%>p) may help you match specific 
access.log entries with TCP connections in the packet trace...



What Squid version are you using for this? Does SslBump affect the 
problematic transactions?



Thank you,

Alex.




but I also notice that the
requests are consuming bandwidth; here is an example.
Squid access.log format:
%ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a:%<p %mt
mac="%>eui" %note ua="%{User-Agent}>h" exterr="%err_code|%err_detail"

Access.log request.
1681099742.517 35 10.81.216.114 TCP_DENIED_ABORTED/407 41154 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.575 41 10.81.216.114 TCP_DENIED/407 511819 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.664 73 10.81.216.114 NONE/200 0 CONNECT mainnet.infura.io:443
HLBHO/tsyafiq HIER_NONE/-:- - mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" exterr="-|-"

1681099742.685 20 10.81.216.114 TCP_DENIED_ABORTED/403 450655 CONNECT
mainnet.infura.io:443 HLBHO/tsyafiq HIER_NONE/-:- text/html
mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="-" exterr="ERR_ACCESS_DENIED|-"

Each TCP_DENIED request is consuming 40+ bytes, so at the end of the day
sometimes I have a total of 56k requests to mainnet.infura.io consuming
around 15GB of bandwidth.

My question is, assuming that %<st is the size of the response sent to the
client, why is TCP_DENIED taking a lot of bandwidth to block a website?

Best regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help to understand tcp_denied in access.log

2023-04-13 Thread andre.bolinhas
Hi
I'm seeing too many requests to the website mainnet.infura.io; analyzing the
access.log, it seems that the website is blocked, but I also notice that the
requests are consuming bandwidth; here is an example.
Squid access.log format:
%ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a:%<p %mt
mac="%>eui" %note ua="%{User-Agent}>h" exterr="%err_code|%err_detail"

Access.log request.
1681099742.517 35 10.81.216.114 TCP_DENIED_ABORTED/407 41154 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.575 41 10.81.216.114 TCP_DENIED/407 511819 CONNECT
mainnet.infura.io:443 - HIER_NONE/-:- text/html mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36"
exterr="ERR_CACHE_ACCESS_DENIED|-"

1681099742.664 73 10.81.216.114 NONE/200 0 CONNECT mainnet.infura.io:443
HLBHO/tsyafiq HIER_NONE/-:- - mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" exterr="-|-"

1681099742.685 20 10.81.216.114 TCP_DENIED_ABORTED/403 450655 CONNECT
mainnet.infura.io:443 HLBHO/tsyafiq HIER_NONE/-:- text/html
mac="00:00:00:00:00:00"
category:%20143%0D%0Acategory-name:%20Trackers%0D%0Aclog:%20cinfo:143-Trackers;%0D%0Auser:%20HLBHO/tsyafiq%0D%0A
ua="-" exterr="ERR_ACCESS_DENIED|-"

Each TCP_DENIED request is consuming 40+ bytes, so at the end of the day
sometimes I have a total of 56k requests to mainnet.infura.io consuming
around 15GB of bandwidth.

My question is, assuming that %<st is the size of the response sent to the
client, why is TCP_DENIED taking a lot of bandwidth to block a website?

Best regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with using Squid proxy and VPN at the same time.

2023-02-19 Thread Peter Hucker

On Mon, 20 Feb 2023 02:21:52 -, Amos Jeffries  wrote:


On 20/02/2023 10:26 am, Peter Hucker wrote:

I use a Squid proxy just for Boinc: I have 8 PCs, and it caches the
downloads (large data files) which all 8 machines get. I also use a
VPN. I want to tell the VPN not to put Squid through it (as for some
reason Boinc servers hate VPNs). I can tell the VPN to exclude certain
apps by giving it the name of the executable. But what executable in
Squid actually connects to the network?


The squid binary name is 'squid'.


Odd, I told Ivacy VPN to use split tunneling and not put
"C:\Squid\bin\squid.exe" through the VPN.  Boinc is still sluggish.  Can I
tell from the Squid logs if it's using the VPN?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with using Squid proxy and VPN at the same time.

2023-02-19 Thread Amos Jeffries

On 20/02/2023 10:26 am, Peter Hucker wrote:
I use a Squid proxy just for Boinc: I have 8 PCs, and it caches the
downloads (large data files) which all 8 machines get. I also use a
VPN. I want to tell the VPN not to put Squid through it (as for some
reason Boinc servers hate VPNs). I can tell the VPN to exclude certain
apps by giving it the name of the executable. But what executable in
Squid actually connects to the network?


The squid binary name is 'squid'.

Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with using Squid proxy and VPN at the same time.

2023-02-19 Thread Peter Hucker

I use a Squid proxy just for Boinc: I have 8 PCs, and it caches the downloads
(large data files) which all 8 machines get. I also use a VPN. I want to tell
the VPN not to put Squid through it (as for some reason Boinc servers hate
VPNs). I can tell the VPN to exclude certain apps by giving it the name of the
executable. But what executable in Squid actually connects to the network?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help wanted to clean up and update the wiki

2022-11-30 Thread Francesco Chemolli
Hi all!
In the last few weeks I have started the process of cleaning up the Squid wiki
and porting it to a faster and more effective platform - the goal is to be
able to host it on-site or on GitHub Pages.

I have done the heavy lifting of collecting and auto-transcoding the
contents from MoinMoin to Markdown, but as always happens there's a lot
of cleanup to do, and at 500 pages it's a lot of work. You can find the
work in progress at https://kinkie.github.io/ and the underlying dataset at
https://github.com/kinkie/kinkie.github.io .

THE ASK
Is anyone willing to help clean up and migrate pages?


HOW TO HELP
I am currently focusing on the ConfigExamples section of the wiki. Pages to
be cleaned up are in old/ConfigExamples. For each page in that subtree, the
"example as-is" content needs to be removed (it's now added by the
template engine), the markdown needs to be cleaned up, and the page can be
moved to the docs/ subtree (in the same relative location); then send me a
PR with the changes.
Changes can be tested locally, if you have jekyll installed on your system
along with the plugins listed in docs/_config.yml, by running the bin/serve
script.
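For instance, assuming jekyll and those plugins are already installed:

  git clone https://github.com/kinkie/kinkie.github.io
  cd kinkie.github.io
  bin/serve   # serves the site locally via jekyll for previewing changes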

Thanks for any contributions, and please ask questions if something is
unclear; they will help me develop a more comprehensive how-to-help guide.

-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help to redirect http request to another squid proxy

2021-02-28 Thread Amos Jeffries

On 26/02/21 8:47 pm, jmpatagonia wrote:
Hello, I need help to redirect HTTP/HTTPS requests from a specific domain
to another Squid proxy server.

For example, for a domain such as microsoft.com, redirect
or transfer all requests to another Squid proxy server.




Firstly, "redirect" has a meaning in HTTP, and it has nothing to do with
what you seem to want to do.


You are using cache_peer, which is the directive you need to be looking 
at for the "HTTP routing" that meets your need.


However, there are some things that look odd in the details you
provided. I suspect they are related to other things, not mentioned, that
you have done to the system setup thinking "redirect" was needed.





I try to use this:
#
http_port  xx.xx.xx.xx:8080 accel



This is a reverse-proxy. The CONNECT method is not meant to be sent to 
reverse-proxies.




acl microsoft_acl dstdomain microsoft.com 
cache_peer yy.yy.yy.yy  parent 8080  0  name=proxy60 default


"default" indicates the peer is able to handle any traffic and can be 
used as a backup route when DIRECT fails.


You should not use that option on a peer where only certain site/domain 
are serviced.




cache_peer_access proxy60 allow  microsoft_acl
cache_peer_access proxy60 deny all
#
but it does not work; error:
26/Feb/2021:07:23:27 -0300 || - || xx.xx.xx.xx || TAG_NONE/405|| CONNECT 
|| error:method-not-allowed || text/html




What does your logformat define all those log fields to mean?
  There are 4 IP addresses involved in a proxy transaction, and it is
unclear why the xx.xx.xx.xx is the one being logged.



Why are you using port 8080?
  Last I saw, Microsoft do not host their website on port 8080.

Hint: for "just an example name" use the domains which are registered 
specially for that purpose: example.com, example.net, example.org. It 
avoids confusing us into thinking microsoft.com is *actually* the domain 
you are hosting - there are implications to hosting their site which 
change the answers you could get.
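Putting those points together, a sketch of what the routing part could look
like for an ordinary forward proxy ("accel" and "default" dropped;
example.com stands in for the real domain):

  http_port 8080
  acl peer_domains dstdomain .example.com
  cache_peer yy.yy.yy.yy parent 8080 0 name=proxy60
  cache_peer_access proxy60 allow peer_domains
  cache_peer_access proxy60 deny all
  never_direct allow peer_domains   # route these requests via the peer only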




Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help to redirect http request to another squid proxy

2021-02-26 Thread jmpatagonia
Hello, I need help to redirect HTTP/HTTPS requests from a specific domain to
another Squid proxy server.

For example, for a domain such as microsoft.com, redirect or transfer all
requests to another Squid proxy server.

I tried to use this:
#
http_port  xx.xx.xx.xx:8080 accel
acl microsoft_acl dstdomain microsoft.com
cache_peer yy.yy.yy.yy  parent 8080  0  name=proxy60 default
cache_peer_access proxy60 allow  microsoft_acl
cache_peer_access proxy60 deny all
#
but it does not work; error:
26/Feb/2021:07:23:27 -0300 || - || xx.xx.xx.xx || TAG_NONE/405|| CONNECT ||
error:method-not-allowed || text/html

regards.
Juan Manuel.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with with delay pools

2020-10-14 Thread Amos Jeffries
On 15/10/20 7:52 am, Service MV wrote:
> Hello everyone, I don't know if anyone can help me with this configuration.
> 
> acl Domain_Users note group AQUAAAUV7TIfbORUj8PLQv4YAQIAAA==
> delay_pools 1
> delay_class 1 1
> delay_parameters 1 250/250
> delay_access 1 allow Domain_User
> 
> What I am looking for is to limit each individual user to 20 Mbit/s. But
> I don't know if I'm really limiting all users to 20 Mbit/s with this
> configuration.
> Could someone with more experience please tell me if I am doing it
> right?


You are not. The above limits all members of that group across the
entire network to share 19 Mbit/s.

To fix:

* for 20Mbit/s absolute speed set -1/2621440. That means a maximum of
20Mbit (2621440 bytes) can be available for use, and the available
amount refills fully each second.

* for per-username limits set a class 4 pool with "none" (or in older
Squid "-1/-1") for the limit parameters your policy does not care about.


So it should look like:

 delay_pools 1
 delay_class 1 4
 delay_parameters 1 none none none -1/2621440
 delay_access 1 allow Domain_User


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with with delay pools

2020-10-14 Thread Service MV
Hello everyone, I don't know if anyone can help me with this configuration.

acl Domain_Users note group AQUAAAUV7TIfbORUj8PLQv4YAQIAAA==
delay_pools 1
delay_class 1 1
delay_parameters 1 250/250
delay_access 1 allow Domain_User

What I am looking for is to limit each individual user to 20 Mbit/s. But I
don't know if I'm really limiting all users to 20 Mbit/s with this
configuration.
Could someone with more experience please tell me if I am doing it
right?
Thank you very much in advance.

PS: That is my only doubt; the note ACL is already matching the transaction
annotation from the negotiate_kerberos_auth helper.

squid -v
Squid Cache: Version 5.0.3
Service Name: squid

This binary uses OpenSSL 1.1.1d  10 Sep 2019. For legal restrictions on
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/opt/squid-503' '--includedir=/include'
'--mandir=/share/man' '--infodir=/share/info'
'--localstatedir=/opt/squid-503/var' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules' '--enable-inline'
'--enable-async-io' '--enable-storeio=ufs,aufs,diskd'
'--enable-removal-policies=lru,heap' '--enable-delay-pools'
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
'--enable-follow-x-forwarded-for' '--enable-auth-basic=fake,LDAP'
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group'
'--enable-arp-acl' '--enable-esi' '--disable-translation'
'--with-logdir=/var/log/squid-503' '--with-pidfile=/var/run/squid-503.pid'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'--enable-ltdl-convenience' '--with-openssl' '--enable-ssl'
'--enable-ssl-crtd'
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help Request: How to deal with Basic Authentication

2020-09-17 Thread Amos Jeffries
FYI;
 if this file is only accessed by the Squid auth helper (usually the
case) it should be in /etc/squid or a sub-directory under there, and have
read access (no write) for the proxy group. Ownership should be root or an
admin account with permission to add/remove entries; Squid does not need
those permissions.

If it is shared with other systems, then there should be an appropriate
group that Squid can be added to gain read-only access for validating
the credentials in it.
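A sketch of what that looks like on disk, assuming the proxy runs in the
"proxy" group and the file is /etc/squid/passwd:

  chown root:proxy /etc/squid/passwd
  chmod 640 /etc/squid/passwd   # root can edit; group proxy can only read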

Amos


On 17/09/20 11:34 pm, Wind Lee wrote:
> Thanks Amos, the problem has been fixed. It was because my passwd file
> couldn't be read by the squid user: I wrongly placed it in the root user's
> home directory and forgot to change its owner attributes.
> 
> On 2020/9/17 6:34 PM, Amos Jeffries wrote:
>> I see Squid being told to accept valid credentials. What about missing
>> ones? invalid ones? garbage credentials?
>>
>> Best practice for auth is to deny all non-valid credentials before
>> accepting.
>>
>>    http_access deny !auth
>>    http_access allow localnet
>>
>>
>> Amos
>>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help Request: How to deal with Basic Authentication

2020-09-17 Thread Wind Lee
Thanks Amos, the problem has been fixed. It was because my passwd file
couldn't be read by the squid user: I wrongly placed it in the root user's
home directory and forgot to change its owner attributes.


On 2020/9/17 6:34 PM, Amos Jeffries wrote:

I see Squid being told to accept valid credentials. What about missing
ones? invalid ones? garbage credentials?

Best practice for auth is to deny all non-valid credentials before
accepting.

   http_access deny !auth
   http_access allow localnet


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help Request: How to deal with Basic Authentication

2020-09-17 Thread Amos Jeffries
On 17/09/20 5:22 pm, Wind Lee wrote:
> Hi all,
> 
> I'm trying to set up an HTTP(S) proxy with Basic authentication. For now
> it works fine without auth, but as soon as I add the auth part, it
> keeps rejecting auth requests from the client side; for example, Google
> Chrome keeps requesting the username and password.
> 

What do the Squid logs say is going on?

> I've checked /usr/lib64/squid/basic_ncsa_auth /PATH/TO/PASSWD_FILE
> in the console, and it returns OK when I type the correct username/password.
> 
> Distribution is CentOS 7, squid version is 4.9
> 

Please upgrade to 4.13.


> I really don't know what to do next, here's the configuration:
> 
> https://paste.ubuntu.com/p/SXf6tN8cCg/
> 

I see Squid being told to accept valid credentials. What about missing
ones? invalid ones? garbage credentials?

Best practice for auth is to deny all non-valid credentials before
accepting.

  http_access deny !auth
  http_access allow localnet
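Here "auth" stands for the usual proxy_auth ACL, defined along these lines:

  acl auth proxy_auth REQUIRED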


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help Request: How to deal with Basic Authentication

2020-09-16 Thread Wind Lee

Hi all,

I'm trying to set up an HTTP(S) proxy with Basic authentication. For now
it works fine without auth, but as soon as I add the auth part, it
keeps rejecting auth requests from the client side; for example, Google
Chrome keeps requesting the username and password.


I've checked /usr/lib64/squid/basic_ncsa_auth /PATH/TO/PASSWD_FILE
in the console, and it returns OK when I type the correct username/password.


Distribution is CentOS 7, squid version is 4.9

I really don't know what to do next, here's the configuration:

https://paste.ubuntu.com/p/SXf6tN8cCg/

Thanks.

Wind Lee.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help

2020-06-27 Thread Matus UHLAR - fantomas

On 30.05.20 16:35, santosh panchal wrote:

We have set up an outbound proxy in AWS for private infra

We have put the required entry in /etc/profile


what "required entry"?


and tried to install a package on an
Ubuntu machine, but we get an error as it is not going over the internet

Error
Connecting to AP-south-1.ec2.archive.ubuntu.com

Also we are unable to curl google.com even after whitelisting the domain in
the CloudFormation template


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help

2020-06-26 Thread santosh panchal
Hi Team

We have set up an outbound proxy in AWS for private infra.

We have put the required entry in /etc/profile and tried to install a package
on an Ubuntu machine, but we get an error as it is not going over the internet.

Error
Connecting to AP-south-1.ec2.archive.ubuntu.com

Also we are unable to curl google.com even after whitelisting the domain in
the CloudFormation template.

Thanks
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with FTP native proxy squid 3.5

2020-04-30 Thread Alex Rousskov
On 4/29/20 3:45 PM, Dawood Aijaz wrote:

> I am able to configure an FTP proxy through HTTP, however I need a native
> FTP proxy. I was told Squid supports it as of v3.5, but I am unable to find
> any help regarding configuration or any tutorial to help me do this task.
> 
> Can anyone share a configuration for setting up a native FTP proxy?

Here is one example:

ftp_port 21

Please see http://www.squid-cache.org/Doc/config/ftp_port/ for details.

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with FTP native proxy squid 3.5

2020-04-29 Thread Dawood Aijaz
Hi,
I am able to configure an FTP proxy through HTTP, however I need a native
FTP proxy. I was told Squid supports it as of v3.5, but I am unable to find
any help regarding configuration or any tutorial to help me do this task.

Can anyone share a configuration for setting up a native FTP proxy?

Regards,
Dawood Aijaz
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help regarding configuring a native FTP proxy

2020-04-27 Thread Matus UHLAR - fantomas

On 27.04.20 18:46, Dawood Aijaz wrote:

Amos Jeffries pointed out that there is native FTP support in Squid
as of v3.5, but I am unable to find any help regarding configuration or
any tutorial to help me do this task.

Can anyone share a configuration for setting up a native FTP proxy?


I believe this requires ftp_port plus the standard access directives.
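Something along these lines (a sketch; the ACL name is just an example):

  ftp_port 21
  acl ftpproto proto FTP
  http_access allow localnet ftpproto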

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
How does cat play with mouse? cat /dev/mouse
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help regarding configuring a native FTP proxy

2020-04-27 Thread Dawood Aijaz
Hi,
Amos Jeffries pointed out that there is native FTP support in Squid
as of v3.5, but I am unable to find any help regarding configuration or
any tutorial to help me do this task.

Can anyone share a configuration for setting up a native FTP proxy?

Regards,
Dawood Aijaz
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with TCP_MISS/200

2020-04-06 Thread Amos Jeffries
On 7/04/20 10:13 am, Juan Manuel P wrote:
> Hello, I am implementing a reverse transparent proxy, connected directly to
> the internet, with round-robin balancing to two internal reverse
> transparent proxies.
> 

There is no such thing as "reverse transparent proxy".

"reverse proxy" and "transparent proxy" have incompatible requirements
at the HTTP semantic behaviour level.

Also, "round-robin balance" and "transparent proxy" have mutually
exclusive packet routing requirements at the TCP level.

What exactly have you configured? at both proxy levels, and in the
networking systems between them?

> 
> Note that my requests are always served with a 0 TCP_MISS/200, on all three
> Squids.
> 
> Does that mean they are never taken from the cache?
> 

That is what it means, yes.


> Squid conf:
> 
> cache_dir ufs /var/spool/squid-app/ 300 16 256
> 

This cache cannot store more than 300MB total data. That may or may not
be relevant. We will need to see the HTTP messages at both client and
server connections for at least the proxy first receiving HTTP requests,
maybe the backends as well.

For current Squid versions you can retrieve those details from cache.log
after configuring "debug_options 11,2". Make sure the cache is empty
first, make an HTTP request for one object, wait 10sec then after that
transaction *finishes*, then make a second identical request.
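A sketch of that test run, assuming the proxy listens on 127.0.0.1:3128 and
using squidclient to issue the two identical requests:

  # in squid.conf:
  debug_options ALL,1 11,2

  squidclient -h 127.0.0.1 -p 3128 http://example.com/object
  sleep 10
  squidclient -h 127.0.0.1 -p 3128 http://example.com/object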

If the URLs are publicly accessible, you can use the tool at redbot.org
to fetch them and it will report details about the HTTP response
cacheability and any problems with the HTTP syntax.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help with TCP_MISS/200

2020-04-06 Thread Juan Manuel P
Hello, I am implementing a reverse transparent proxy, connected directly to
the internet, with round-robin balancing to two internal reverse transparent
proxies.



ONE CHILD (reverse & balance)
   - parent one (reverse & balance)
   - parent two (reverse & balance)



Note that my requests are always served with a 0 TCP_MISS/200, on all three
Squids.

Does that mean they are never taken from the cache?

Squid conf:

cache_dir ufs /var/spool/squid-app/ 300 16 256


Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with squid proxy parent directive

2019-11-26 Thread jmperrote


Hello, we are trying to configure two Squid reverse proxies, one a frontend
to the internet and the other inside the network.

Both are in reverse proxy mode, one depending on the other.

    internet ---> squid reverse proxy one ---> squid reverse proxy two 
---> inside web server


Accessing the first reverse proxy is OK; when trying to access the second
proxy we get an error ((104) Connection reset by peer)


                The system returned: (104) Connection reset by peer
                An error occurred while reading data from the
network. Please try again.

                Your cache administrator is soporte@xxx.

Regards.


Config:

    Squid 3.5

    enabled ssl






___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with HTTPS SQUID 3.1.23 https proxy not working

2019-09-22 Thread KleinEdith
Thanks for helping me, I fixed my problem; now I can see SuCarroRD.com,
Bing.com, and more.
Thanks for your help. I will recommend this site to my other friends. Have
a good day.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with HTTPS SQUID 3.1.23 https proxy not working

2019-09-21 Thread Matus UHLAR - fantomas

On 21.09.19 02:51, KleinEdith wrote:

Squid as the HTTPS proxy is not working

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#acl localnet src fc00::/7   # RFC 4193 local private network range
#acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl localnet src 10.0.0.188 # David Computer
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

acl bad_urls dstdomain "/etc/squid/blacklisted_sites.acl"
acl good_url dstdomain "/etc/squid/good_sites.acl"
#http_access deny bad_url

I can't connect to:

Outlook.com 
Yahoo.com 
SuCarroRD.com 
Gmail.com 
Bing 


You haven't posted your whole squid config, have you?
If you did, you definitely need to allow some access (for a limited set of
IPs), because the default is deny.



--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with HTTPS SQUID 3.1.23 https proxy not working

2019-09-21 Thread KleinEdith
Squid as the HTTPS proxy is not working

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#acl localnet src fc00::/7   # RFC 4193 local private network range
#acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl localnet src 10.0.0.188 # David Computer
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

acl bad_urls dstdomain "/etc/squid/blacklisted_sites.acl"
acl good_url dstdomain "/etc/squid/good_sites.acl"
#http_access deny bad_url

I can't connect to:

Outlook.com    
Yahoo.com   
SuCarroRD.com   
Gmail.com   
Bing   

And more. I need help, please, to fix this problem.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with helper

2019-09-03 Thread Amos Jeffries
On 4/09/19 1:13 am, jmperrote wrote:
> Hello Amos, yes, but how can I identify that it is the first request?
> 

It will be first? But what does "first" actually mean?
  First this year? First today? First this second?

HTTP is stateless. There is no concept of "second request" etc. outside
of features which are *not* related to users or useful to you here.

_Every_ request that your config requires credentials to accept, needs
credentials provided or will get a 401/407 response. That is just how
auth works in HTTP. There are likely many of those which are handled by
the Browser without any popup at all.
 To Squid there is no difference between request 1 without credentials
and request 2 without credentials.


> Otherwise Squid asks to authenticate, and later, when it invokes the helper
> again, asks to authenticate again.

Every time Squid is handed never-before-seen credentials the helper will
be asked to check them.

Every time Squid is handed credentials that are apparently expired, the
helper will be asked to check them.


> 
> In the helper I recover the user from the Squid cache (cachemgr), to ask
> whether the user previously exists, but Squid refreshes the cache and users
> disappear from time to time.

Yes. Computers do not have infinite memory. Things that are clearly
obsolete are thrown away after a reasonable time.


To make credentials stick around longer you can do two things:

 1) increase their TTL. The longer they are considered valid the longer
they are retained as possibly useful.

 Pros: they stick around. Less CPU load on the auth system.

 Cons: they stick around. Increased memory usage. Reduced ability to
change passwords. Reduced ability to kick malicious users off the proxy
by disabling hacked credentials.


 2) increase the garbage collection interval Squid uses. This keeps
obsolete logins around longer.

 Pros: more known logins.

 Cons: more memory used storing logins.


Both have the possibility/risk that users "login session" goes longer
than you might be expecting.

For example, if set to 10hrs (one working day): a user may "logout" late
one night, then re-login early the next day (9hrs of sleep later) and be
seen by Squid as having continued the same login started yesterday.
 Even 2hrs is too long to cover lunch breaks etc.
 Up to you of course, just consider what type of activities may be
problematic for your system for any given time range.

> 
> The exact question is: how do we know if the user has previously logged in,
> so the helper just validates user/password

Yes.

> and later ALLOW to continue.
> 

No.

Authentication vs Authorization. There is a thin difference, but it is
very important to understanding these things going on.

The auth helper only does Authentication - checking that credentials are
*correct*.

Squid ACLs do the Authorization - allow/deny actions. Which may (or not)
be based on whether credentials are correct / authenticated.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with helper

2019-09-03 Thread jmperrote

Hello Amos, yes, but how can I identify that it is the first request?

Otherwise Squid asks to authenticate, and later, when it invokes the helper
again, asks to authenticate again.


In the helper I recover the user from the Squid cache (cachemgr), to ask
whether the user previously exists, but Squid refreshes the cache and users
disappear from time to time.


The exact question is: how do we know if the user has previously logged in,
so the helper just validates user/password and later ALLOWs them to continue.


Regards.


On 3/9/19 at 09:41, Amos Jeffries wrote:

On 3/09/19 10:35 pm, jmperrote wrote:

Hello, we have a helper to validate users on a Squid reverse proxy, and
have a problem with the first validation!

On a normal day, on the first validation, when a user opens the client
browser, Squid invokes the popup and the user inserts the correct
user/password to validate; later Squid

apparently runs the helper, requesting the user and password again.

I need help to know if it is possible to identify when a user connects for
the first time.

Users cannot be identified until they provide credentials.

This being a reverse proxy means that to the Browser it is no different from
any web server. No sane software will ever blindly assume that the user's
LAN account credentials are going to be valid when connecting to a
random web server.

Thus the Browser needs to have stored credentials in its password
manager for that 'website' being hosted by your proxy, or use the popup
to discover them when it starts to do traffic there.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with helper

2019-09-03 Thread Amos Jeffries
On 3/09/19 10:35 pm, jmperrote wrote:
> Hello, we have a helper to validate users on a Squid reverse proxy, and
> have a problem with the first validation!
> 
> On a normal day, on the first validation, when a user opens the client
> browser, Squid invokes the popup and the user inserts the correct
> user/password to validate; later Squid
> 
> apparently runs the helper, requesting the user and password again.
> 
> I need help to know if it is possible to identify when a user connects
> for the first time.

Users cannot be identified until they provide credentials.

This being a reverse proxy means that to the Browser it is no different from
any web server. No sane software will ever blindly assume that the user's
LAN account credentials are going to be valid when connecting to a
random web server.

Thus the Browser needs to have stored credentials in its password
manager for that 'website' being hosted by your proxy, or use the popup
to discover them when it starts to do traffic there.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help with helper

2019-09-03 Thread jmperrote
Hello, we have a helper to validate users on a Squid reverse proxy, and
have a problem with the first validation!


On a normal day, on the first validation, when a user opens the client
browser, Squid invokes the popup and the user inserts the correct
user/password to validate; later Squid


apparently runs the helper, requesting the user and password again.


I need help to know if it is possible to identify when a user connects for
the first time.



regards.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with IP forwarder on squid

2019-08-28 Thread Matus UHLAR - fantomas

On 28.08.19 09:22, jmperrote wrote:
Hello Matus thanks for the answer, but on the apache backend server we 
just receip request from the reverse proxy, and we mounted software 
for DDOS on the apache server, so we need to identified the ip from 
reverse proxy for DDOS work.


and this is exactly why I said you must configure Apache to accept the
X-Forwarded-For header from Squid, so Apache knows which real IP connects
from the outside.  And I said the proper module is mod_remoteip or something
similar.

However, your anti-DDOS software should connect to Squid, not to Apache.
Otherwise, your Squid server might be useless in that setup.


On 28/8/19 at 08:40, Matus UHLAR - fantomas wrote:

On 28.08.19 07:59, jmperrote wrote:
Hello, we have a reverse proxy Squid and on the backend an Apache
server with anti-DDOS software.


Any request on the Apache comes from the IP of the reverse
proxy, because it is forwarded to the Apache backend.


We need the Apache server to receive the original IP of the client.

We tried the "forwarder_for on" directive at the top of squid.conf,
but no result.


you must configure apache to accept that IP as the original IP.
I think it has mod_remoteip or something like that.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
"One World. One Web. One Program." - Microsoft promotional advertisement
"Ein Volk, ein Reich, ein Fuhrer!" - Adolf Hitler
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with IP forwarder on squid

2019-08-28 Thread jmperrote
Hello Matus, thanks for the answer, but on the Apache backend server we
only receive requests from the reverse proxy, and we installed anti-DDOS
software on the Apache server, so we need to identify the real client IP
behind the reverse proxy for the DDOS protection to work.


regards.


On 28/8/19 at 08:40, Matus UHLAR - fantomas wrote:

On 28.08.19 07:59, jmperrote wrote:
Hello, we have a reverse proxy Squid and on the backend an Apache
server with anti-DDOS software.


Any request on the Apache comes from the IP of the reverse
proxy, because it is forwarded to the Apache backend.


We need the Apache server to receive the original IP of the client.

We tried the "forwarder_for on" directive at the top of squid.conf,
but no result.


you must configure apache to accept that IP as the original IP.
I think it has mod_remoteip or something like that.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with IP forwarder on squid

2019-08-28 Thread Matus UHLAR - fantomas

On 28.08.19 07:59, jmperrote wrote:
Hello, we have a reverse proxy Squid and on the backend an Apache server
with anti-DDOS software.


Any request on the Apache comes from the IP of the reverse
proxy, because it is forwarded to the Apache backend.


We need the Apache server to receive the original IP of the client.

We tried the "forwarder_for on" directive at the top of squid.conf,
but no result.


you must configure apache to accept that IP as the original IP.
I think it has mod_remoteip or something like that.
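A sketch of both sides, assuming the proxy's backend-facing address is
192.0.2.10 (a placeholder):

  # squid.conf
  forwarded_for on

  # apache httpd.conf
  LoadModule remoteip_module modules/mod_remoteip.so
  RemoteIPHeader X-Forwarded-For
  RemoteIPInternalProxy 192.0.2.10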

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
- Holmes, what kind of school did you study to be a detective?
- Elementary, Watkins.  -- Daffy Duck & Porky Pig
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with IP forwarder on squid

2019-08-28 Thread jmperrote
Hello, we have a reverse proxy Squid and on the backend an Apache server
with anti-DDOS software.


Any request on the Apache comes from the IP of the reverse
proxy, because it is forwarded to the Apache backend.


We need the Apache server to receive the original IP of the client.

We tried the "forwarder_for on" directive at the top of squid.conf,
but no result.



regards.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help to disconnect users after determinated time. TTL

2019-08-16 Thread Amos Jeffries
On 16/08/19 3:30 am, jmperrote wrote:
> Hello Emmanuel, we finished implementing a solution in a PHP script, reading
> the TTL time < 0 from the cachemgr, and it works.
> 
> The problem is that the param --> auth_param basic credentialsttl 3
> minutes, gives this time (180 seconds), but even if the user is still
> navigating on the site, the
> 
> "Check TTL" value is not renewed while the user is navigating; so if the user
> does not click anywhere on the page just when the "Check TTL" counter is 0,
> the user's counter goes below 0.
> 
> 
> Is it possible to introduce any param that tells Squid to renew the counter
> when a user is within the credentialsttl time and still navigating?

credentialsttl does not mean what you seem to think it does.

It is just an optimization to reduce the amount of lookups to the
helper. How often they are *checked*.

In your other thread you showed this report:

>
>   TypeState Check TTL Cache TTL Username
>   --- - - -
--
>   AUTH_BASIC  Ok583598  prueba


Think of credentialsTTL ("Check TTL") hitting 0 as the start of that
"grace period". The cache garbage collection (Cache TTL) defines the end
- when the credentials are completely forgotten by Squid.

As you can see there is already a "grace period" of 3540 seconds on
these credentials.


As Emmanuel said you can fake a sort-of logout by having a custom helper
pretend the credentials have expired suddenly. But that is something
your helper does, not this TTL.

Keep in mind that while Squid is awaiting your helper response all new
HTTP requests using those credentials will be queued up waiting for its
response. When the helper responds its answer will be applied to all
those queued and all future requests until the next credentialsttl
period ends.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help to disconnect users after determinated time. TTL

2019-08-15 Thread jmperrote
Hello Emmanuel, we finished implementing a solution in a PHP script, reading
the TTL time < 0 from the cachemgr, and it works.


The problem is that the param --> auth_param basic credentialsttl 3
minutes, gives this time (180 seconds), but even if the user is still
navigating on the site, the


"Check TTL" value is not renewed while the user is navigating; so if the user
does not click anywhere on the page just when the "Check TTL" counter is 0,
the user's counter goes below 0.


Is it possible to introduce any param that tells Squid to renew the counter
when a user is within the credentialsttl time and still navigating?

regards.
 




On 13/8/19 at 12:33, FUSTE Emmanuel wrote:

Hello,

On 13/08/2019 at 17:06, jmperrote wrote:

Hello Emmanuel, thanks for your answer.

We need a solution where, if the user does nothing for a period
of time, for security reasons the reverse proxy requests
authentication again. How can we achieve that?

You need to generate a failed auth to force client cache expiration/auth
popup.
So you need to manage your own intermediate cache/TTL in your PHP script.

Put the squid credentialsttl at 5 minutes.
Squid will call your authenticator two times in ten minutes on an active
"session" but zero times on a stale one. In that case, issue an auth fail
the next time even if the auth is OK.
Disable negative caching on squid to get it to work.

But it is not very robust:
At startup you will need two auth popups to successfully connect.
Many pages make requests to your backend, resetting the TTL.
Etc.

As HTTP is stateless, it is more difficult than it sounds.
Perhaps something is doable with a Kerberos/ticket authentication scheme,
but I have not looked into it.

Emmanuel.

We use auth_param basic with a PHP script (LDAP repository) for
authentication.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help to disconnect users after determinated time. TTL

2019-08-13 Thread FUSTE Emmanuel
On 13/08/2019 at 16:44, jmperrote wrote:
> Hello, we have a Squid reverse proxy, and use the param "auth_param
> basic credentialsttl 10 minutes" to disconnect users that are inactive
> for a time, but this does NOT work, because once a user has validated on
> the reverse proxy they can continue navigating on the reverse proxy even
> after 10 minutes of inactivity.
>
Hello,
That is not how things work.
You cannot achieve what you want with Basic auth.
The TTL is the TTL of the cache between the source of authentication
(file/ldap/sql etc.) and Squid.
The client authenticates itself to your backend on each request because it
caches the auth material. There is no notion of "disconnection" from the
server side. It could only be a client-side policy, if implemented in the
browser.

Emmanuel.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help to disconnect users after determinated time. TTL

2019-08-13 Thread jmperrote
Hello, we have a Squid reverse proxy, and use the param "auth_param
basic credentialsttl 10 minutes" to disconnect users that are inactive
for a time, but this does NOT work, because once a user has validated on
the reverse proxy they can continue navigating on the reverse proxy even
after 10 minutes of inactivity.


And the users can continue navigating day after day without needing to
revalidate, as long as the browser is not closed.


Watching the Cache Manager menu --> Active Cached Usernames --> Check
TTL, the Check TTL keeps decreasing, but when it reaches 0 it continues
decreasing into negative values. We observe that when the user refreshes
the browser, the Check TTL returns to the value of the credentialsttl
setting (in seconds) and starts decreasing again.



regards.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with HTTPS SQUID 3.1.23

2019-06-27 Thread Amos Jeffries
On 27/06/19 1:29 am, Anderson Rosario wrote:
> I cannot access HTTPS sites. 3 weeks ago it was working fine; without
> any change to the topology or config it stopped working with
> HTTPS sites. It keeps loading, and I receive a message from
> browsers: "The connection to the server was reset while the page was
> loading."
> 

Your Squid is not doing anything with HTTPS at all. It lets CONNECT
tunnels through - provided the server name meets your required ACLs.

It may be related to Browser changes in how they handle non-200
responses to CONNECT since your access controls all require a login to
take place.


> *here my squid config:*
> 
> #
> visible_hostname proxy.local.local

Really .local.local ?


...> http_port 3128
> 
> # AD AUTH ###
> auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b
> "dc=local,dc=LOCAL" -D "cn=squid,ou=proxy,dc=local,dc=LOCAL" -w "123456"
> -f sAMAccountName=%s -h 192.168.0.213
> 
> auth_param basic children 5
> auth_param basic realm Inserte su usuario de Windows para navegar
> auth_param basic credentialsttl 1 hour
> 
> external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -R
> -b "dc=local,dc=LOCAL" -D "cn=squid,ou=proxy,dc=local,dc=LOCAL" -w
> "123456" -f "(&(objectclass=person)
> (sAMAccountName=%v)(memberof=cn=%a,ou=proxy,dc=local,dc=LOCAL))" -h
> 192.168.0.213
> ##
> 
> ## ALCs que definen los grupos ##
> acl nivel0 external ldap_group nivel0
> acl nivel1 external ldap_group nivel1
> acl nivel2 external ldap_group nivel2
> acl nivel3 external ldap_group nivel3
> acl nivel4 external ldap_group nivel4
> acl nivel5 external ldap_group nivel5
> acl nivel6 external ldap_group nivel6
> 
> #
> 
...
> #
> 
> ## Reglas de acceso ##
> 
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> 
> 
> #
> http_access allow nivel6
> http_access allow nivel5
> http_access allow nivel4
> http_access allow nivel3 !rule3 !desc3 !rule7 !desc7
> http_access allow nivel2 !rule2 !desc2 !rule7 !desc7
> http_access deny nivel1 !rule1
> http_access allow nivel1 !desc1 !rule7 !desc7
> http_access deny nivel0
> http_access deny all
> ##
> 
> 

Due to the "deny all" being above the http_access lines below do anything.

What this means is that external parties *are* allowed to access the
proxy management reports and potentially private info about other clients.

 ... not only is the below the recommended *minimum* config, it is supposed
to be listed early, like the Safe_ports and SSL_ports rules, in order to
protect your network from attacks.


> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
> 
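
In practice that means an ordering along these lines (a sketch assembled
from rules already shown in this thread, not a complete config):

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow manager localhost
  http_access deny manager
  # ... per-group nivelN allow/deny rules here ...
  http_access deny all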

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-27 Thread Amos Jeffries
On 27/06/19 11:39 am, Jared Fox wrote:
> Hi Amos
> 
> So I have tried the following based on your suggestions, but it is
> still failing, with the errors below:
> 
> 1. Switched to a wildcard whitelist instead of single domain
> 2. Updated the logformat to provide more information, see below:
> 3. Add in `--client-requested`, but this made no difference.
>3a. Add to single ACL, acl domainIsWhitelisted ssl::server_name
> --client-requested cloudtrace.googleapis.com
>3b. Commented out single record, switched to wildcard
>3c. Add to wildcard
> 
> Error messages and Logs:
> 
> Access Log: 26/Jun/2019:23:18:38 96 REDACTED 216.58.200.106
> NONE/200 0 CONNECT 216.58.200.106:443 HTTP/1.1 SSL:
> cloudtrace.googleapis.com peek Client(Subject/Tx/Neg/Sup/Cip): -
> TLS/1.0 - TLS/1.2 - Server(Subject/Rx/Neg/Sup/Cip): - TLS/1.2 -
> TLS/1.2 -
> 
> Cache Log: 2019/06/26 23:18:38 kid1| ERROR: negotiating TLS on FD
> 11: error:140920F8:SSL routines:ssl3_get_server_hello:unknown cipher
> returned (1/-1/0)
> 

This means the OpenSSL library being used by Squid does not contain any
support for the cipher(s) the server chose to use for this transaction.

The only way I am aware of to avoid it is to upgrade the OpenSSL
library Squid is built against.


> Can you please explain what you mean? What should this be changed to so
> that it does work?
> 
>> Please be aware that in your config the ssl::server_name ACL is *not* 
>> matching the SNI in your config.
>> - Your ssl_bump rules say "peek all" - so peek happens on the two Hello
>> messages. When the serverHello has been peek'd the real server name is
>> available from the servers own certificate.
> 

To quote the ssl::server_name documentation:

"
# The ACL computes server name(s) using such information sources as
# CONNECT request URI, TLS client SNI, and TLS server certificate
# subject (CN and SubjectAltName). The computed server name(s) usually
# change with each SslBump step, as more info becomes available:
# * SNI is used as the server name instead of the request URI,
# * subject name(s) from the server certificate (CN and
#   SubjectAltName) are used as the server names instead of SNI.
"

That last bullet point is what is/was happening with your original proxy
config.

The "--client-requested" flag overrides that and causes the SNI to be
used in the match even when server cert is known.


> Updated Squid.conf.
> 
> # ===
> # Squid 4.7 Config - Work in Progress
> # ===
> 
> acl localnet src 10.0.0.0/8 # Kubernetes VPC CIDR range
> acl SSL_ports port 443  # HTTPS
> acl Safe_ports port 80   # HTTP
> acl Safe_ports port 443 # HTTPS
> acl CONNECT method CONNECT   # Traffic restriction
> acl step1 at_step SslBump1  # Needed by ssl-bump
> 
> # ---
> # Whitelist the following Domains
> # ---
> # FQDN - Try to use FQDN
> acl domainIsWhitelisted ssl::server_name accounts.google.com
> 
> # --
> # Wildcard
> acl domainIsWhitelisted ssl::server_name --client-requested .googleapis.com
> acl domainIsWhitelisted ssl::server_name --client-requested
> .googleapis.l.google.com
> # ---
> 
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
> 
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow localnet
> http_access allow localhost
> 
> # And finally deny all other access to this proxy
> http_access deny all
> 
> # Passively Intercepted HTTPS Traffic
> https_port 9091 cert=/etc/squid/example.com.cert
> key=/etc/squid/example.com.private ssl-bump intercept
> acl step1 at_step SslBump1
> ssl_bump peek all
> ssl_bump splice domainIsWhitelisted
> ssl_bump terminate all
> 
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
> 
> # Logging
> logformat custom1 %tg %6tr %>a %<a %Ss/%Hs %<st %rm %ru HTTP/%rv SSL: %ssl::>sni %ssl::bump_mode Client(Subject/Tx/Neg/Sup/Cip): %ssl::>cert_subject %ssl::>received_hello_version %ssl::>negotiated_version %ssl::>received_supported_version %ssl::>negotiated_cipher Server(Subject/Rx/Neg/Sup/Cip): %ssl::<cert_subject %ssl::<received_hello_version %ssl::<negotiated_version %ssl::<received_supported_version %ssl::<negotiated_cipher
> access_log daemon:/var/log/squid/access_custom1.log custom1
> 
> # Listen on port 3128 for HTTP CONNECT method - unused and firewalled off.
> http_port 3128


NP: this is not about CONNECT method. It is about serving up error
pages, FTP listings, and all the icons/scripts/stylesheets etc embedded
in those.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-26 Thread Jared Fox
Hi Amos

So I have tried the following based on your suggestions, but it is
still failing, with the errors below:

1. Switched to a wildcard whitelist instead of single domain
2. Updated the logformat to provide more information, see below:
3. Add in `--client-requested`, but this made no difference.
   3a. Add to single ACL, acl domainIsWhitelisted ssl::server_name
--client-requested cloudtrace.googleapis.com
   3b. Commented out single record, switched to wildcard
   3c. Add to wildcard

Error messages and Logs:

Access Log: 26/Jun/2019:23:18:38 96 REDACTED 216.58.200.106
NONE/200 0 CONNECT 216.58.200.106:443 HTTP/1.1 SSL:
cloudtrace.googleapis.com peek Client(Subject/Tx/Neg/Sup/Cip): -
TLS/1.0 - TLS/1.2 - Server(Subject/Rx/Neg/Sup/Cip): - TLS/1.2 -
TLS/1.2 -

Cache Log: 2019/06/26 23:18:38 kid1| ERROR: negotiating TLS on FD
11: error:140920F8:SSL routines:ssl3_get_server_hello:unknown cipher
returned (1/-1/0)

Can you please explain what you mean? What should this be changed to so
that it does work?

> Please be aware that in your config the ssl::server_name ACL is *not* 
> matching the SNI in your config.
> - Your ssl_bump rules say "peek all" - so peek happens on the two Hello
> messages. When the serverHello has been peek'd the real server name is
> available from the servers own certificate.

Updated Squid.conf.

# ===
# Squid 4.7 Config - Work in Progress
# ===

acl localnet src 10.0.0.0/8 # Kubernetes VPC CIDR range
acl SSL_ports port 443  # HTTPS
acl Safe_ports port 80   # HTTP
acl Safe_ports port 443 # HTTPS
acl CONNECT method CONNECT   # Traffic restriction
acl step1 at_step SslBump1  # Needed by ssl-bump

# ---
# Whitelist the following Domains
# ---
# FQDN - Try to use FQDN
acl domainIsWhitelisted ssl::server_name accounts.google.com

# --
# Wildcard
acl domainIsWhitelisted ssl::server_name --client-requested .googleapis.com
acl domainIsWhitelisted ssl::server_name --client-requested
.googleapis.l.google.com
# ---

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Passively Intercepted HTTPS Traffic
https_port 9091 cert=/etc/squid/example.com.cert
key=/etc/squid/example.com.private ssl-bump intercept
acl step1 at_step SslBump1
ssl_bump peek all
ssl_bump splice domainIsWhitelisted
ssl_bump terminate all

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Logging
logformat custom1 %tg %6tr %>a %<a %Ss/%Hs %<st %rm %ru HTTP/%rv SSL: %ssl::>sni %ssl::bump_mode Client(Subject/Tx/Neg/Sup/Cip): %ssl::>cert_subject %ssl::>received_hello_version %ssl::>negotiated_version %ssl::>received_supported_version %ssl::>negotiated_cipher Server(Subject/Rx/Neg/Sup/Cip): %ssl::<cert_subject %ssl::<received_hello_version %ssl::<negotiated_version %ssl::<received_supported_version %ssl::<negotiated_cipher
access_log daemon:/var/log/squid/access_custom1.log custom1

# Listen on port 3128 for HTTP CONNECT method - unused and firewalled off.
http_port 3128
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with HTTPS SQUID 3.1.23

2019-06-26 Thread Anderson Rosario
I cannot access HTTPS sites. 3 weeks ago it was working fine; without
any change to the topology or config it stopped working with HTTPS
sites. Pages keep loading and I receive a message from browsers: The
connection to the server was reset while the page was loading.

*here my squid config:*

#
# Recommended minimum configuration:


#
visible_hostname proxy.local.local

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/23 # RFC1918 possible internal network
acl localnet src 192.168.0.0/23
#acl localnet src fc00::/7   # RFC 4193 local private network range
#acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 85 # puerto agregado
acl Safe_ports port 883 # puerto agregado
acl Safe_ports port 5222 # puerto agregado
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_port 3128

# AD AUTH ###
auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b
"dc=local,dc=LOCAL" -D "cn=squid,ou=proxy,dc=local,dc=LOCAL" -w "123456" -f
sAMAccountName=%s -h 192.168.0.213

auth_param basic children 5
auth_param basic realm Inserte su usuario de Windows para navegar
auth_param basic credentialsttl 1 hour

external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -R -b
"dc=local,dc=LOCAL" -D "cn=squid,ou=proxy,dc=local,dc=LOCAL" -w "123456" -f
"(&(objectclass=person)
(sAMAccountName=%v)(memberof=cn=%a,ou=proxy,dc=local,dc=LOCAL))" -h
192.168.0.213
##

## ACLs that define the groups ##
acl nivel0 external ldap_group nivel0
acl nivel1 external ldap_group nivel1
acl nivel2 external ldap_group nivel2
acl nivel3 external ldap_group nivel3
acl nivel4 external ldap_group nivel4
acl nivel5 external ldap_group nivel5
acl nivel6 external ldap_group nivel6

#

## Custom ACLs ##
acl rule1 url_regex -i ars humano senasa universal arsuniversal google.com
google.com.do universal.com.do .tss.gov.do tss tss.gov.do banreservas
banreservas.com universal.com arshumano arshumano.com consultascuentas
consultascuentas.arshumano.com banreservas.com.do \.jpg$

acl rule2 dstdomain .facebook.com .youtube.com .rdmusica.com .
listindiario.com .diariolibre.com .hotmail.com .outlook.com .yahoo.com .
mlb.com .espn.com .bleacherreport.com .lamega.com .espn.go.com .
espndeportes.com mail.google.com .twitter.com .hi5.com .freakshare.com .
bitshare.com .seriespepito.com .seriales.com .cuevana.tv .rapidshare.com .
supercarros.com .chatango.com .blogger.com .videobb.com .gmail.com

acl rule3 dstdomain .youtube.com .mlb.com .espn.com .bleacherreport.com .
lamega.com .espn.go.com .espndeportes.com   seriespepito.com .
seriales.com .cuevana.tv .rapidshare.com .supercarros.com .chatango.com .
blogger.com .videobb.com .sex.com .xxx.com .facebook.com

acl desc1 url_regex -i \.avi$ \.mov$ \.rar$ \.qt$ \.mpe$ \.mpeg$ \.mpg$
\.ief$ \.wav$ \.mp3$ \.mp4$ \.tar$ \.rpm$ \.zip$ \.gtar$ \.exe$ \.movie$
\.midi$ \.mid$ \.kar$ \.java$ \.dir$ sex lesbian porn porno xxx

acl rule7 dstdomain .facebook.com .hotmail.com mail.google.com .gmail.com .
yahoo.com .yahoo.es accounts.google.com

acl desc7 url_regex -i accounts gmail mail accounts.google.com

acl desc2 url_regex -i \.avi$ \.mov$ \.rar$ \.qt$ \.mpe$ \.mpeg$ \.mpg$
\.jpe$ \.jpg$ \.jpeg$ \.ief$ \.bmp$ \.wav$ \.mp3$ \.mp4$ \.tar$ \.rpm$
\.zip$ \.gtar$ \.exe$ \.movie$ \.midi$ \.mid$ \.kar$ \.dir$ \.png$ sex
lesbian porn porno

acl desc3 url_regex -i \.avi$ \.mov$ \.qt$ \.ief$  \.wav$ \.mp3$ \.mp4$
\.tar$ \.rpm$ \.gtar$ \.exe$ \.movie$ \.midi$ \.mid$ \.kar$  \.dir$ \.bmp$
\.java$ \.png$ \.mpe$ \.mpeg$ \.mpg$  lesbian porn porno xxx

acl desc4 url_regex -i \.avi$ \.png$  \.java$ \.mpe$ \.mpeg$ \.mpg$ \.mov$
\.qt$  \.rpm$\.gtar$ \.exe$ \.movie$ \.dir$ \.rar$ sex lesbian porn porno
#

## Access rules ##

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


#
http_access allow nivel6
http_access allow nivel5
http_access allow nivel4
http_access allow nivel3 !rule3 !desc3 !rule7 !desc7
http_access allow nivel2 !rule2 !desc2 !rule7 !desc7
http_access deny nivel1 !rule1
http_access allow nivel1 !desc1 !rule7 !desc7
http_access deny nivel0
http_access deny all
##


# Recommended minimum Access Permission configuration:
#
# Only 

Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-26 Thread Amos Jeffries
On 26/06/19 2:45 pm, Jared Fox wrote:
> == Bad news / Major Blocker ==
> https connections to cloud tracing is still being blocked, these are
> TLS 1.2 and uses SNI as seen via tcpdump.
>
Okay, now that you have the v4 capabilities:

* Please add %ssl::bump_mode to your log so we can see easily which
SSL-Bump step each transaction is representing. The
"cloudtrace.googleapis.com" ones all say 200 (success) so it is not
clear whether that is a successful peek, or successful terminate action.


Please be aware that in your config the ssl::server_name ACL is *not*
matching the SNI in your config.
- Your ssl_bump rules say "peek all" - so peek happens on the two Hello
messages. When the serverHello has been peek'd the real server name is
available from the servers own certificate.

 So that server cert name is what the ssl::server_name ACL matches against
when checking the "splice domainIsWhitelisted" rule.

 The dozens of servers at cloudtrace.googleapis.com call themselves
"edgecert.googleapis.com" and have a long list of sub-domains for
googleapis.com.

 ==> I suggest changing domainIsWhitelisted to match just the
".googleapis.com" part of the domain.


Alternatively you can add the new "--client-requested" flag to the ACL,
which will force it to use the SNI even after more reliable info is
available. Like so:
  acl domainIsWhitelisted ssl::server_name \
 --client-requested cloudtrace.googleapis.com


If those do not work, then someone will need to dig down into the
cache.log debug trace of what the ssl_bump ACLs are matching against.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-25 Thread Jared Fox
Hi Amos / Squid-Users

So, some good news and some bad news, and I'm still blocked.

== Good news ==
I have managed to get Squid 4.7 running on Centos 7.6.1810, with the
squid & squid-helpers binary rpms from
`http://www1.ngtech.co.il/repo/centos/$releasever/$basearch/`.

FYI: The squid-helpers rpm does not work on Amazon Linux 2 due to
incomplete dependencies. That is out of scope for this help request, as
I'm not concerned by it at the moment. It's a 3rd party rpm anyway.

The squid-helpers security_file_certgen required a symlink to work, as
security_file_certgen is not in the default path. A symlink was
quicker than updating PATH: `ln -s
/usr/lib64/squid/security_file_certgen
/usr/local/sbin/security_file_certgen`

Only squid.conf change (from what was previously listed) was to add:
http_port 3128

== Bad news / Major Blocker ==
https connections to cloud tracing is still being blocked, these are
TLS 1.2 and uses SNI as seen via tcpdump.

26/Jun/2019:02:23:13    956 Kube-Node-Zone-B-IP 162.247.242.26
TCP_TUNNEL/200 3059 CONNECT 162.247.242.26:443
collector-001.newrelic.com HTTP/1.1
26/Jun/2019:02:23:14    978 Kube-Node-Zone-B-IP 162.247.242.26
TCP_TUNNEL/200 3059 CONNECT 162.247.242.26:443
collector-001.newrelic.com HTTP/1.1
26/Jun/2019:02:23:16 95 Kube-Node-Zone-B-IP 216.58.199.74
NONE/200 0 CONNECT 216.58.199.74:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:16 96 Kube-Node-Zone-B-IP 216.58.199.42
NONE/200 0 CONNECT 216.58.199.42:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:16 94 Kube-Node-Zone-B-IP 172.217.167.106
NONE/200 0 CONNECT 172.217.167.106:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:16 95 Kube-Node-Zone-B-IP 172.217.167.74
NONE/200 0 CONNECT 172.217.167.74:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:16 94 Kube-Node-Zone-B-IP 172.217.25.170
NONE/200 0 CONNECT 172.217.25.170:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:16 96 Kube-Node-Zone-B-IP 172.217.25.138
NONE/200 0 CONNECT 172.217.25.138:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:17 94 Kube-Node-Zone-B-IP 216.58.203.106
NONE/200 0 CONNECT 216.58.203.106:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:17 96 Kube-Node-Zone-B-IP 216.58.200.106
NONE/200 0 CONNECT 216.58.200.106:443 cloudtrace.googleapis.com
HTTP/1.1
26/Jun/2019:02:23:17    848 Kube-Node-Zone-B-IP 162.247.242.27
TCP_TUNNEL/200 3112 CONNECT 162.247.242.27:443
collector-001.newrelic.com HTTP/1.1
26/Jun/2019:02:23:18    994 Kube-Node-Zone-B-IP 162.247.242.27
TCP_TUNNEL/200 3059 CONNECT 162.247.242.27:443
collector-001.newrelic.com HTTP/1.1
26/Jun/2019:02:23:19    833 Kube-Node-Zone-B-IP 162.247.242.27
TCP_TUNNEL/200 3059 CONNECT 162.247.242.27:443
collector-001.newrelic.com HTTP/1.1
26/Jun/2019:02:23:20   1192 Kube-Node-Zone-B-IP 162.247.242.27
TCP_TUNNEL/200 3059 CONNECT 162.247.242.27:443
collector-001.newrelic.com HTTP/1.1

I really need to get Google Stackdriver Cloud Tracing working with
squid so am open to any advice / recommendations.

Kind regards

Jared Fox

DevOps Architect - Practiv
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-25 Thread Jared Fox
Thank you Amos

I will update the Squid config and give Squid-helpers 3.5 a go today
and let you know.

Do you have any idea why only some TLS 1.2 connections would work with
the whitelisting?

Thanks
Jared
DevOps Architect - Practiv

On Tue, Jun 25, 2019 at 9:04 PM Amos Jeffries  wrote:
>
> On 25/06/19 1:24 pm, Jared Fox wrote:
> > Hi Squid-Users
> >
> > I need your help!
> >
> > So I have been using Squid 3.5.20 (installed on Amazon Linux 2)
> > acting as a transparent SSL proxy with a whitelist of allowed
> > addresses. I want to avoid running a MITM proxy and having to add CA
> > certs to all services/containers etc. Traffic is routed to the squid
> > instance via a route table to the interface.
> >
> > " Issue 1 - upgrade from 3.5.20 to 4.4.4 (squid-4.4-4.amzn2.0.4.x86_64) "
> >
> > - So my working config below does not work with 4.x but it kind of
> > does for 3.5.x, and it appears that I require the squid-helpers
> > package, which doesn't exist for Amazon Linux.
>
> You will have to contact whoever created the package for that.
>
> You should be able to run the v3.5 helpers with a later Squid - but will
> of course not gain any improvements that have been made in the later
> version helpers.
>
>
> > - When starting, squid tries to create an SSL database via
> > security_file_certgen, but this shouldn't be needed as I'm providing a
> > self-signed cert that doesn't get used in transparent mode but is a
> > hard dependency in 3.5.
>
> That is a bug, side effect of the helper being started even when not
> needed. As a workaround it should be sufficient to create the DB for the
> helper and leave it not being used.
>
> >
> > " Errors produced: "
> >
> > (security_file_certgen)2019/06/25 00:37:57 kid1| ERROR: No
> > forward-proxy ports configured.
> > 2019/06/25 00:37:57 kid1| ERROR: No forward-proxy ports configured.
>
> That is correct. You only have one port (9091) - which is an intercept port.
>
> At least one forward-proxy port is needed for a fully functional proxy.
> 3128 is the official one for that.
>
>
> > 2019/06/25 00:37:57 kid1| storeDirWriteCleanLogs: Starting...
> > : Uninitialized SSL certificate database directory:
> > /var/spool/squid/ssl_db. To initialize, run "security_file_certgen -c
> > -s /var/spool/squid/ssl_db".
> > 2019/06/25 00:37:57 kid1|   Finished.  Wrote 0 entries.
> > 2019/06/25 00:37:57 kid1|   Took 0.00 seconds (  0.00 entries/sec).
> > 2019/06/25 00:37:57 kid1| FATAL: mimeLoadIcon: cannot parse internal
> > URL: 
> > http://ip-10-0-60-70.ec2.internal:0/squid-internal-static/icons/silk/image.png
>
> Side effect of not having a forward-proxy port is that all URLs for
> things clients require fetching from Squid are invalid.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-25 Thread Amos Jeffries
On 25/06/19 1:24 pm, Jared Fox wrote:
> Hi Squid-Users
> 
> I need your help!
> 
> So I have been using Squid 3.5.20 (installed on Amazon Linux 2)
> acting as a transparent SSL proxy with a whitelist of allowed
> addresses. I want to avoid running a MITM proxy and having to add CA
> certs to all services/containers etc. Traffic is routed to the squid
> instance via a route table to the interface.
> 
> " Issue 1 - upgrade from 3.5.20 to 4.4.4 (squid-4.4-4.amzn2.0.4.x86_64) "
> 
> - So my working config below does not work with 4.x but it kind of
> does for 3.5.x, and it appears that I require the squid-helpers
> package, which doesn't exist for Amazon Linux.

You will have to contact whoever created the package for that.

You should be able to run the v3.5 helpers with a later Squid - but will
of course not gain any improvements that have been made in the later
version helpers.


> - When starting, squid tries to create an SSL database via
> security_file_certgen, but this shouldn't be needed as I'm providing a
> self-signed cert that doesn't get used in transparent mode but is a
> hard dependency in 3.5.

That is a bug, side effect of the helper being started even when not
needed. As a workaround it should be sufficient to create the DB for the
helper and leave it not being used.
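
For example, a sketch of that workaround (paths are taken from the error
message quoted below; the -M size and the ownership step are assumptions):

  /usr/lib64/squid/security_file_certgen -c -s /var/spool/squid/ssl_db -M 4MB
  chown -R squid:squid /var/spool/squid/ssl_db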

> 
> " Errors produced: "
> 
> (security_file_certgen)2019/06/25 00:37:57 kid1| ERROR: No
> forward-proxy ports configured.
> 2019/06/25 00:37:57 kid1| ERROR: No forward-proxy ports configured.

That is correct. You only have one port (9091) - which is an intercept port.

At least one forward-proxy port is needed for a fully functional proxy.
3128 is the official one for that.


> 2019/06/25 00:37:57 kid1| storeDirWriteCleanLogs: Starting...
> : Uninitialized SSL certificate database directory:
> /var/spool/squid/ssl_db. To initialize, run "security_file_certgen -c
> -s /var/spool/squid/ssl_db".
> 2019/06/25 00:37:57 kid1|   Finished.  Wrote 0 entries.
> 2019/06/25 00:37:57 kid1|   Took 0.00 seconds (  0.00 entries/sec).
> 2019/06/25 00:37:57 kid1| FATAL: mimeLoadIcon: cannot parse internal
> URL: 
> http://ip-10-0-60-70.ec2.internal:0/squid-internal-static/icons/silk/image.png

Side effect of not having a forward-proxy port is that all URLs for
things clients require fetching from Squid are invalid.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with transparent whitelisting proxy on Squid 4.4

2019-06-24 Thread Jared Fox
Hi Squid-Users

I need your help!

So I have been using Squid 3.5.20 (installed on Amazon Linux 2)
acting as a transparent SSL proxy with a whitelist of allowed
addresses. I want to avoid running a MITM proxy and having to add CA
certs to all services/containers etc. Traffic is routed to the squid
instance via a route table to the interface.

" Issue 1 - upgrade from 3.5.20 to 4.4.4 (squid-4.4-4.amzn2.0.4.x86_64) "

- So my working config below does not work with 4.x but it kind of
does for 3.5.x, and it appears that I require the squid-helpers
package, which doesn't exist for Amazon Linux.
- When starting, squid tries to create an SSL database via
security_file_certgen, but this shouldn't be needed as I'm providing a
self-signed cert that doesn't get used in transparent mode but is a
hard dependency in 3.5.

" Errors produced: "

(security_file_certgen)2019/06/25 00:37:57 kid1| ERROR: No
forward-proxy ports configured.
2019/06/25 00:37:57 kid1| ERROR: No forward-proxy ports configured.
2019/06/25 00:37:57 kid1| storeDirWriteCleanLogs: Starting...
: Uninitialized SSL certificate database directory:
/var/spool/squid/ssl_db. To initialize, run "security_file_certgen -c
-s /var/spool/squid/ssl_db".
2019/06/25 00:37:57 kid1|   Finished.  Wrote 0 entries.
2019/06/25 00:37:57 kid1|   Took 0.00 seconds (  0.00 entries/sec).
2019/06/25 00:37:57 kid1| FATAL: mimeLoadIcon: cannot parse internal
URL: 
http://ip-10-0-60-70.ec2.internal:0/squid-internal-static/icons/silk/image.png
2019/06/25 00:37:57 kid1| Squid Cache (Version 4.4): Terminated abnormally.

" Squid config file contains: "

===
acl localnet src 10.0.0.0/8   # Kubernetes VPC CIDR range
acl SSL_ports port 443# HTTPS
acl Safe_ports port 80# HTTP
acl Safe_ports port 443   # HTTPS
acl CONNECT method CONNECT# Traffic restriction
acl step1 at_step SslBump1# Needed by ssl-bump

# ---
# Whitelist the following Domains
# ---

# Shorten whitelist - just for this email / Edited config here
acl domainIsWhitelisted ssl::server_name googleapis.l.google.com
acl domainIsWhitelisted ssl::server_name logging.googleapis.com
acl domainIsWhitelisted ssl::server_name cloudtrace.googleapis.com

# --

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Passively Intercepted HTTPS Traffic
https_port 9091 cert=/etc/squid/example.com.cert
key=/etc/squid/example.com.private ssl-bump intercept
acl step1 at_step SslBump1
ssl_bump peek all
ssl_bump splice domainIsWhitelisted
ssl_bump terminate all

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Logging
logformat custom1 %tg %6tr %>a %<a %Ss/%Hs %<st %rm %ru %ssl::>sni HTTP/%rv
access_log daemon:/var/log/squid/access_custom1.log custom1
access_log udp://127.0.0.1:5140
===

" Issue 2 "
- So the reason for the upgrade is that some TLS 1.2 connections are
being blocked when they should be whitelisted, and it depends on the
client used, e.g. curl vs Netty. I believe this may be due to
unsupported TLS extensions, but I cannot prove this, as the differences
seen via tcpdump are minor.

Is this because my configuration above is incorrect?

Kind regards

Jared Fox
DevOps Architect - Practiv
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with reverse proxy sending user to peer

2019-05-16 Thread Amos Jeffries
On 17/05/19 2:56 am, jmperrote wrote:
> 
> OK, now I want to know if it is possible to get or recover from the LDAP
> an attribute and later deliver this attribute to the peer server the same
> way that I deliver the username in the header.

See


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with reverse proxy sending user to peer

2019-05-16 Thread jmperrote
Hello again Amos, finally on my reverse proxy I could deliver to the
upstream peer/server the data (username) that I need, using the directive


request_header_add X-Remote-User "%ul"

This is the user captured from authentication (%ul = user name) and
validated by --> auth_param basic program auth.php


My helper auth.php goes to an internal LDAP to validate the user, and the
helper answers OK/ERR as its response.


OK, now I want to know if it is possible to get or recover from the LDAP
an attribute and later deliver this attribute to the peer server the same
way that I deliver the username in the header.


Regards,



El 16/5/19 a las 07:28, jmperrote escribió:

Thanks a lot Amos, I'll try this out for testing.


Regards.


El 16/5/19 a las 06:24, Amos Jeffries escribió:

On 16/05/19 3:26 am, jmperrote wrote:

Hello Amos, we use

--> auth_param basic program ./.../auth.php

to authenticate the user to the reverse proxy.


auth_param is full HTTP authentication. So the %ul code is what you need
to use in your custom header value for username from that helper.


The %ue is for the external_acl_type helpers output. "user name" is
different from "username" - the single space may seem pedantic but with
security the minor distinction can mean vast differences in risk.

The label in %ue is authorized, but not guaranteed to be valid. Whereas
%ul is authenticated and thus guaranteed valid.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with reverse proxy sending user to peer

2019-05-16 Thread jmperrote

Thanks a lot Amos, I'll try this out for testing.


Regards.


El 16/5/19 a las 06:24, Amos Jeffries escribió:

On 16/05/19 3:26 am, jmperrote wrote:

Hello Amos, we use

--> auth_param basic program ./.../auth.php

to authenticate the user to the reverse proxy.


auth_param is full HTTP authentication. So the %ul code is what you need
to use in your custom header value for username from that helper.


The %ue is for the external_acl_type helpers output. "user name" is
different from "username" - the single space may seem pedantic but with
security the minor distinction can mean vast differences in risk.

The label in %ue is authorized, but not guaranteed to be valid. Whereas
%ul is authenticated and thus guaranteed valid.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with reverse proxy sending user to peer

2019-05-15 Thread Amos Jeffries
On 15/05/19 12:09 pm, jmperrote wrote:
> hello, I need help to know if it is possible with squid to pass the
> username authenticated on the reverse proxy to the peer?
> 

Firstly, please be aware that the username you may see in proxy logs is
not required to be authenticated. In modern Squid it just has to be sent.


> 
> The idea is that the webserver application can catch, via POST method or
> similar, the username logged in and authenticated on the reverse proxy.
> 

You can use the request_header_add directive to add custom headers with
any information Squid has at the time those headers are generated for
delivery to the upstream peer/server.
  

But ... which username?

"
ul  User name from authentication
ue  User name from external acl helper
ui  User name from ident
un  A user name. Expands to the first available name
from the following list of information sources:
- authenticated user name, like %ul
- user name supplied by an external ACL, like %ue
- SSL client name, like %us
- ident user name, like %ui

  credentials   Client credentials. The exact meaning depends on
the authentication scheme: For Basic authentication,
it is the password; for Digest, the realm sent by the
client; for NTLM and Negotiate, the client challenge
or client credentials prefixed with "YR " or "KK ".
"


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help with reverse proxy sending user to peer

2019-05-14 Thread jmperrote
hello, I need help to know if it is possible with squid to pass the
username authenticated on the reverse proxy to the peer?


I have a reverse proxy with external authentication against an LDAP
repository; once the user is validated on the reverse proxy and
redirected to the peer, I need to send the username, or something that
permits the webserver (peer) to know

the username that was authenticated on squid.

The idea is that the webserver application can catch, via POST method or
similar, the username logged in and authenticated on the reverse proxy.


regards-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HELP! Ssl_bump - acl , dstdomain , denied by fqdn need ip

2019-01-25 Thread Alex Rousskov
On 1/25/19 1:15 AM, Александр Александрович Березин wrote:

> 0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -

Looks like your http_access rules deny some (or all) CONNECT requests,
probably during SslBump step1. This is not related to your ssl_bump
rules. Examine those rules and adjust them to allow CONNECT requests you
want to allow (and deny all other CONNECT requests).
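
For intercepted traffic that usually means something along these lines (a
sketch, not your config; the localnet definition is an assumption based on
your 192.168.50.x addresses):

  acl localnet src 192.168.50.0/24
  http_access allow localnet
  http_access deny all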


> acl test dstdomain partner.steam-api.com

I doubt this causes TCP_DENIED errors, but you may want to use an
ssl::server_name ACL instead of dstdomain.


HTH,

Alex.


> [Fri Jan 25 06:50:10 2019].516      0 192.168.50.10 TCP_DENIED/200 0
> CONNECT 208.64.202.87:443 - HIER_NONE/- -
> [Fri Jan 25 06:50:10 2019].530      0 192.168.50.10 TCP_DENIED/200 0
> CONNECT 208.64.202.87:443 - HIER_NONE/- -
> [Fri Jan 25 06:50:10 2019].537      0 192.168.50.10 TAG_NONE/403 3806
> GET https://partner.steam-api.com/ - HIER_NONE/- text/html
> [Fri Jan 25 06:50:10 2019].568      0 192.168.50.10 TCP_DENIED/200 0
> CONNECT 208.64.202.87:443 - HIER_NONE/- -
> [Fri Jan 25 06:50:10 2019].576      0 192.168.50.10 TCP_DENIED/200 0
> CONNECT 208.64.202.87:443 - HIER_NONE/- -
> [Fri Jan 25 06:50:10 2019].583      0 192.168.50.10 TAG_NONE/403 3806
> GET http://berezin:0/squid-internal-static/icons/SN.png - HIER_NONE/-
> text/html
>  
> in browser i have are error
>  
> squid error the requested url could not be retrieved
> the following error was encountered while trying to retrieve the url
> https://208.64.202.87 
>  
> if i add 208.64.202.87  in acl test dstdomain
> everything is good and I connect to partner.steam-api.com
>  
>  
> but the address at the end partner.steam-api.com  can be dynamic and
> constantly changing, so I need a connection by name
> tell me what is my mistake?
>  
> -- 
> С Уважением,
> Александр Александрович Березин
>  
> With respect,
> Alexander Alexandrovich Berezin
>  
>  
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HELP! Ssl_bump - acl , dstdomain , denied by fqdn need ip

2019-01-25 Thread Amos Jeffries
On 25/01/19 9:15 pm, Александр Александрович Березин wrote:
> Please HELP!
>  
> Hello dear members of the community
> excuse me for disturbing me, but I could not find an answer to the
> question, so I speak to you, sorry again
>  
> i have
>  
...
> 
> in /etc/squid.conf
> 
> ...
> 
> acl test dstdomain partner.steam-api.com
>  
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
>  
> ssl_bump peek step1 all

NP: That 'all' has no purpose here.

> ssl_bump splice test

The ssl_bump rules when checked for intercepted traffic are run *before*
anything gets decrypted. Thus there is no HTTP(S) request to get a URL
from, so no URL domain (dstdomain).

Use ssl::server_name ACL type instead. It can match TLS SNI domain (if
any) retrieved by the step1 peek action.


> ssl_bump bump
>  
>  
> http_port 192.168.50.1:3128 intercept
> https_port 192.168.50.1:3129 intercept ssl-bump
> options=ALL:NO_SSLv3:NO_SSLv2 connection-auth=off
> cert=/etc/squid/ssl_cert/squidCA.pem
>  
>  
>  
> when I am trying to access the site from a browser from a local network
> partner.steam-api.com
>  
> access.log
>  
> [Fri Jan 25 06:50:10 2019].514      0 192.168.50.10 TCP_DENIED/200 0
> CONNECT 208.64.202.87:443 - HIER_NONE/- -

Traffic arriving is immediately being denied access into the proxy. The
other log entries and errors are resulting from that fact.

>  
> but the address at the end partner.steam-api.com  can be dynamic and
> constantly changing, so I need a connection by name
> tell me what is my mistake?

Two mistakes. First is the dstdomain vs ssl::server_name ACL types
mentioned above.

Second mistake is http_access rules deny'ing CONNECT messages generated
by Squid to represent the TCP SYN packet for SSL-Bump step1. At that
point all Squid has access to is the raw-IP:port details. SNI where the
server name is received requires the initial CONNECT to be allowed into
the proxy before the TLS inspection can begin.
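
Putting both fixes together, a minimal sketch (the wildcard domain and the
localnet definition are assumptions):

  acl localnet src 192.168.50.0/24
  acl test ssl::server_name .steam-api.com
  http_access allow localnet
  ssl_bump peek step1
  ssl_bump splice test
  ssl_bump bump all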


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] HELP! Ssl_bump - acl , dstdomain , denied by fqdn need ip

2019-01-25 Thread Александр Александрович Березин
Please HELP!

Hello dear members of the community
excuse me for disturbing me, but I could not find an answer to the
question, so I speak to you, sorry again

i have

#46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic

# squid -v
Squid Cache: Version 3.5.27
Service Name: squid
Ubuntu linux
This binary uses OpenSSL 1.0.2n  7 Dec 2017. For legal restrictions on
distribution see https://www.openssl.org/source/license.html
'--enable-ssl' '--enable-ssl-crtd' '--with-openssl'

in /etc/squid.conf
...
acl test dstdomain partner.steam-api.com

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1 all
ssl_bump splice test
ssl_bump bump

http_port 192.168.50.1:3128 intercept
https_port 192.168.50.1:3129 intercept ssl-bump options=ALL:NO_SSLv3:NO_SSLv2 connection-auth=off cert=/etc/squid/ssl_cert/squidCA.pem

when I am trying to access the site from a browser from a local network
partner.steam-api.com

access.log

[Fri Jan 25 06:50:10 2019].514      0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -
[Fri Jan 25 06:50:10 2019].516      0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -
[Fri Jan 25 06:50:10 2019].530      0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -
[Fri Jan 25 06:50:10 2019].537      0 192.168.50.10 TAG_NONE/403 3806 GET https://partner.steam-api.com/ - HIER_NONE/- text/html
[Fri Jan 25 06:50:10 2019].568      0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -
[Fri Jan 25 06:50:10 2019].576      0 192.168.50.10 TCP_DENIED/200 0 CONNECT 208.64.202.87:443 - HIER_NONE/- -
[Fri Jan 25 06:50:10 2019].583      0 192.168.50.10 TAG_NONE/403 3806 GET http://berezin:0/squid-internal-static/icons/SN.png - HIER_NONE/- text/html

in browser i have are error

squid error: the requested url could not be retrieved
the following error was encountered while trying to retrieve the url https://208.64.202.87

if i add 208.64.202.87 in acl test dstdomain
everything is good and I connect to partner.steam-api.com

but the address at the end partner.steam-api.com can be dynamic and
constantly changing, so I need a connection by name
tell me what is my mistake?

--
С Уважением,
Александр Александрович Березин

With respect,
Alexander Alexandrovich Berezin

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-10-01 Thread neok
Hi Eliezer, I apologize! I don't know why I stopped receiving emails from the
squid users list.
Only today I see the thread in nabble.com and I see that it has 23 posts!

Regarding your question, I didn't investigate the error of squidGuard... I
started to migrate my lists to native squid lists as Amos recommended. I
really thought it was the best option. Of course it took work, but the
configuration is cleaner and faster in my opinion. 
There are other posts in which I share my configuration if you want to see
it.

Best regards...

Gabriel




--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-29 Thread Eliezer Croitoru
Hey Gabriel,

 

The thread seems to me like a milestone in this mailing list and in
Squid-Cache history.

From what I understood there is an issue when SquidGuard receives a
specific line from Squid.

In this whole long thread I have not seen any debug logs of what SquidGuard 
receives from Squid.

It's crucial to understand what the issue is and why it happens
regardless of whether SquidGuard is old or not.

Also it’s not related to an ICAP service or URL rewrite or external acl…

I do not remember by heart what debug log section is relevant but Amos and Alex 
should be able to direct us towards these.
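
For reference, a hedged example: in Squid's debug-sections list, section 61
covers the redirector and section 84 covers helper process maintenance, so a
candidate setting might be:

  debug_options ALL,1 61,5 84,5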

When you have the exact line that the url_rewrite helper receives, we
will be able to understand some details.

For some admins this kind of setup is easy, but any LDAP/NTLM/Kerberos
related setup needs to be tested, and I believe this is where you are at.

There is a possibility that SquidGuard as a url_rewrite helper doesn’t receive 
the relevant details it expects such as username or group.

The above can cause this issue.

 

If you can share with us the relevant line that SquidGuard receives and
crashes on, it would help other admins who have yet to encounter it.

 

Eliezer

- I have a setup of above 800 users, but the cache features are turned
off and it's only working for ACL checking.

 

 



Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Service MV
Sent: Monday, September 17, 2018 18:38
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Help: squid restarts and squidGuard die

 

Dear Ones, I draw on your experience in seeking help to determine whether or 
not it is possible to achieve the configuration I am looking for, due to a 
strange error I am having.

 

Before commenting on the bug I describe my testing environment:

- A VM CentOS 7 Core over VirtualBox 5.2, 1 NIC.

- My VM is attached to my domain W2012R2 (following this post 
https://www.rootusers.com/how-to-join-centos-linux-to-an-active-directory-domain/)
 to achieve kerberos authentication transparent to the user. SElinux disabled. 
Owner permissions to user squid in all folders/files involved.

- squid 3.5.20 installed and working great with kerberos, NTLM and basic 
authentication. All authentication mechanisms tested and working great.

- SquidGuard: 1.4 Berkeley DB 5.3.21 installed and working great with 
blacklists and acl default.

 

My problem starts when I try to use source acl using ldapusersearch in 
squidGuard... 

 

systemctl status squid:

(squid-1)[12627]: The redirector helpers are crashing too rapidly, need help!



 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-24 Thread Amos Jeffries
On 25/09/18 7:07 AM, Marcus Kool wrote:
> The sub-thread starts with "do not use the url rewriter helper because
> of complexity"

The thread started earlier than that, with essentially "move simple
rules to squid.conf"

On 18/09/18 6:38 AM, Amos Jeffries wrote:
>
> I recommend you convert as many of your filtering rules as you can into
> normal Squid ACLs. Traffic which is being blocked for simple reasons can
> be done much more efficiently by Squid than a helper.
>

The statement about the helper being complex came later after a
misunderstanding by the OP about what the tools were used for.

You are paraphrasing in a way which changes the meaning of my actual
statement. I was clearly and explicitly advising the OP to work towards
"less complexity" and pointing out that the helper (any helper) is
complex and to be avoided when a simpler solution is also available.


> and ends with that the (not less complex) external acl helpers are fine
> to use.

They are ... when needed. Having them do everything from src-IP checks to
re-authenticating a login Squid already authenticated and passed to them
is needless extra complexity as a long-term solution.


> And in between there is an attempt to kill the URL rewriter interface.
> 

No, just the use of the rewriters for access control. In the context of
an OP who is using a rewriter for a fairly simple set of blacklist and
whitelist of traffic - which got diverted into a debate of Squid vs
re-writer feature comparisons.

You brought up the topic of removing the interface. As I responded then,
there are still use-cases for it. Just, access control is not one of
those cases.


> It would be a lot less confusing if you started with something like
>    I do not like the URL rewriter interface, use the external acl one
> 

That would be only a small amount better (improvement in principle, no
longer destructive for the state lost when re-writing - still complex in
practice). I am pointing the OP at something that should work a bit
better than even that semi-theoretical improvement. They may or may not
end up with a helper still being used, but either way re-assessing this
1980's style config will improve their situation for modern traffic.


>>> ufdbGuard supports dynamic lists of users, domains and source ip
>>> addresses which are updated every X minutes without any service
>>> interruption.
>>
>> So does Squid, via external ACL and/or authentication.
> 
> Aren't you confusing what Squid itself and what Squid+helpers can do?

There is crossover. Though we are delving into realms of principle here.
The data available to the helper running on the URL-rewrite interface is
quite limited - the other interfaces (external ACL in particular) have
wider scope and much more flexibility in what Squid can do with them.

For example SG and ufdbguard may be able to load dynamic lists of users,
but cannot make Squid generate authentication challenge with the correct
parameters to authenticate those users. They can only re-check an
already authenticated username (without access to the password details)
or rewrite/redirect to a third-party server that does so.
 Whereas looking up users in some "dynamic list" without needing a
reconfigure of Squid is pretty much the essence of what auth user/group
helpers do. It is rare to find a never-changing list of users.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-24 Thread Amos Jeffries
On 25/09/18 3:46 AM, Donald Muller wrote:
> I will be downloading the blacklists from the internet and I'm sure that there
> will be sites that I want to whitelist via
> 
> acl whitelist dstdomain "/some folder path/whitelist.acl"
> http_access allow whitelist
> 
> What logging do I need to enable to capture when a site I am trying to access 
> is blacklisted so I can add it to the whitelist?
> 

When your access.log contains DENIED/403 as the transaction status and
no server details it was denied by your policy.

NP: If all you are doing is adding blocked sites to a whitelist, then
it's pointless doing the block at all. The best solution there is to
remove the blacklist entirely.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-24 Thread Marcus Kool

The sub-thread starts with "do not use the url rewriter helper because of 
complexity"
and ends with that the (not less complex) external acl helpers are fine to use.
And in between there is an attempt to kill the URL rewriter interface.

It would be a lot less confusing if you started with something like
   I do not like the URL rewriter interface, use the external acl one

>> ufdbGuard supports dynamic lists of users, domains and source ip
>> addresses which are updated every X minutes without any service
>> interruption.
>
> So does Squid, via external ACL and/or authentication.

Aren't you confusing what Squid itself and what Squid+helpers can do?

Marcus
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-24 Thread Donald Muller
I will be downloading the blacklists from the internet and I'm sure that there
will be sites that I want to whitelist via

acl whitelist dstdomain "/some folder path/whitelist.acl"
http_access allow whitelist

What logging do I need to enable to capture when a site I am trying to access 
is blacklisted so I can add it to the whitelist?

Thanks

> -Original Message-
> From: squid-users  On Behalf
> Of Donald Muller
> Sent: Friday, September 21, 2018 1:18 PM
> To: Amos Jeffries ; squid-users@lists.squid-
> cache.org
> Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> 
> 
> 
> > -Original Message-
> > From: squid-users  On
> > Behalf Of Amos Jeffries
> > Sent: Thursday, September 20, 2018 3:50 PM
> > To: squid-users@lists.squid-cache.org
> > Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> >
> > On 21/09/18 3:46 AM, Donald Muller wrote:
> > >
> > >> -Original Message-
> > >> From: Matus UHLAR - fantomas
> > >> Sent: Thursday, September 20, 2018 7:16 AM
> > >>
> > >> On 19.09.18 20:47, Donald Muller wrote:
> > >>> So instead of using squidguard are you saying  you should use
> > >>> something
> > >> like the following?
> > >>>
> > >>> acl ads dstdomain -i "/etc/squid/squid-ads.acl"
> > >>> acl adult dstdomain -i "/etc/squid/squid-adult.acl"
> > >>>
> > >>> http_access deny ads
> > >>> http_access deny adult
> > >>>
> > >>> Do the lists need to be sorted in alphabetical order?
> > >>
> > >> I don't think so - the lists are parsed to in -memory format for
> > >> faster processing.
> > >>
> > >
> > > Does Squid monitor dstdomain files for changes and reload them or
> > > does a
> > '-k reconfigure' need to be issued?
> > >
> >
> > Not currently. I'm looking for a nice portable way to do file watching.
> >
> > The Linux inotify system can apparently be used to send the -k
> > reconfigure command on FS changes in the config directory. Though I've
> > yet to see any working example and have not had the time myself to
> > experiment on it.
> >
> > Patches and/or info welcome. This might be a good starter project if
> > anyone wants to dip their fingers into the Squid code.
> >
> 
> I will be downloading the blacklists from the internet and I'm sure that there
> will be sites that I want to whitelist via
> 
> acl whitelist dstdomain "/some folder path/whitelist.acl"
> http_access allow whitelist
> 
> What logging do I need to enable to capture when a domain is blacklisted?
> 
> 
> > Amos
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-21 Thread Donald Muller


> -Original Message-
> From: squid-users  On Behalf
> Of Amos Jeffries
> Sent: Thursday, September 20, 2018 3:50 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> 
> On 21/09/18 3:46 AM, Donald Muller wrote:
> >
> >> -Original Message-
> >> From: Matus UHLAR - fantomas
> >> Sent: Thursday, September 20, 2018 7:16 AM
> >>
> >> On 19.09.18 20:47, Donald Muller wrote:
> >>> So instead of using squidguard are you saying  you should use
> >>> something
> >> like the following?
> >>>
> >>> acl ads dstdomain -i "/etc/squid/squid-ads.acl"
> >>> acl adult dstdomain -i "/etc/squid/squid-adult.acl"
> >>>
> >>> http_access deny ads
> >>> http_access deny adult
> >>>
> >>> Do the lists need to be sorted in alphabetical order?
> >>
> >> I don't think so - the lists are parsed to in -memory format for
> >> faster processing.
> >>
> >
> > Does Squid monitor dstdomain files for changes and reload them or does a
> '-k reconfigure' need to be issued?
> >
> 
> Not currently. I'm looking for a nice portable way to do file watching.
> 
> The Linux inotify system can apparently be used to send the -k reconfigure
> command on FS changes in the config directory. Though I've yet to see any
> working example and have not had the time myself to experiment on it.
> 
> Patches and/or info welcome. This might be a good starter project if anyone
> wants to dip their fingers into the Squid code.
> 

I will be downloading the blacklists from the internet and I'm sure that there 
will be sites that I want to whitelist via

acl whitelist dstdomain "/some folder path/whitelist.acl"
http_access allow whitelist

What logging do I need to enable to capture when a domain is blacklisted?


> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Alex Rousskov
On 09/20/2018 02:41 PM, Amos Jeffries wrote:

> Squid does not close or break any client connections when reconfigured.

IIRC, this statement is inaccurate (unfortunately): Reconfiguring Squid
may break client connections that Squid has not started processing yet.
The connections already being processed by Squid are not closed, but the
new/arriving ones may be rejected for a short time period. Such
rejections may affect clients in some environments. This is a bug, so I
hope it will get fixed.

This correction does not affect the rewriter-vs-ACLs comparison, but I
wanted to make it in case that statement is used outside its context.


Cheers,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Alex Rousskov
On 09/20/2018 01:50 PM, Amos Jeffries wrote:
> On 21/09/18 3:46 AM, Donald Muller wrote:


>> Does Squid monitor dstdomain files for changes and reload them or does a '-k 
>> reconfigure' need to be issued?


> Not currently. I'm looking for a nice portable way to do file watching.

> Patches and/or info welcome. This might be a good starter project if
> anyone wants to dip their fingers into the Squid code.

... but please start with an RFC on squid-dev before writing any Squid
code. Implementing correct file watching support in Squid is not
trivial, and the feature itself may not be such a good idea. Please
discuss your plans before spending time on modifying Squid.
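
As a purely external stopgap that modifies nothing inside Squid, a rough,
untested sketch using the inotify-tools package (the watched path and the
event list are assumptions):

  while inotifywait -e close_write,create,moved_to /etc/squid/; do
      squid -k reconfigure
  done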


Thank you,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Amos Jeffries
On 21/09/18 3:46 AM, Marcus Kool wrote:
> 
> On 20/09/18 08:46, Amos Jeffries wrote:
>> On 19/09/18 11:49 PM, Marcus Kool wrote:
>>>
>>> On 18/09/18 23:03, Amos Jeffries wrote:
 On 19/09/18 1:54 AM, neok wrote:
> Thank you very much Amos for putting me in the right direction.
> I successfully carried out the modifications you indicated to me.
> Regarding ufdbGuard, if I understood correctly, what you recommend is
> to use
> the ufdbConvertDB tool to convert my blacklists in plain text to the
> ufdbGuard database format? And then use that/those databases in
> normal squid
> ACL's?

 No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
 replacement which works better while you improve your config.

 You should work towards less complexity. Squid / squid.conf is where
 HTTP access control takes place. The helper is about re-writing the URL
 (only) - which is a complex and destructive process.
>>>
>>> ufdbGuard is a simple tool that has the same syntax in its configuration
>>> file as squidGuard has.
> >>> It is far from complex, has a great Reference Manual, example config
> >>> file and a responsive support desk.
> >>> Amos, I have never seen you call a URL rewriter a complex and
> >>> destructive process.  What do you mean?
>>
>> Re-writing requires Squid to:
>>   * fork external helpers, and
>>   * maintain queues of lookups to those helpers, and
>>   * maintain cache of helper responses, and
>>   * maintain a whole extra copy of HTTP-request state, and
>>   * copy some (not all) of that state info between the two "client"
>> requests.
>>
>>   ... lots of complexity, memory, CPU time, traffic latency, etc.
> 
> Squid itself is complex and for any feature of Squid one can make a list
> like above to say that it is complex.
> The fact that one can make such a list does not mean much to me.
> One can make the same or a similar list for external acl helpers and
> even native acls.
> 
>> Also when used for access control (re-write to an "error" URL) the
>> re-write helper needs extra complexity in itself to act as the altered
>> origin server for error pages, or have some fourth-party web server.
> 
> Squid cannot do everything that a URL writer, and specifically
> ufdbGuard, can.
> For example, Squid must restart and break all open connections when a
> tiny detail of the configuration changes.  With ufdbGuard this does not
> happen.


Squid does not close or break any client connections when reconfigured.
Squid pauses active transactions, reconfigures then continues with the
new config.

Are you perhaps mistaking the fact that Squid shuts down the
*rewriters* on reconfigure for a full Squid shutdown?

(hmm, there is another downside to placing all the access control in a
helper - waiting for the helpers to restart on config changes. Though as
you say ufdbguard does it efficiently, others do not).



> ufdbGuard supports dynamic lists of users, domains and source ip
> addresses which are updated every X minutes without any service
> interruption.

So does Squid, via external ACL and/or authentication.


> When other parameters change, ufdbGuard resets itself with zero service
> interruption for Squid and its users.

This is not always true. If the helper pauses even for some milliseconds
it is holding up Squid and clients. Particularly if it is a bottleneck
process like URL-rewrite interface where the helper lookup queue limits
total traffic capacity of the entire proxy.

I think you mean that the helper has threading to load the new config in
the background and swap it in. Correct?

Squid is working (very slowly) towards that model and the SMP features
already reconfigure one worker at a time sequentially so effectively
there should always be a helper with either old or new config answering
incoming traffic while one "resets itself".


> ufdbGuard can decide to probe a site to make a decision, and hence
> detect Skype, Teamviewer and other types of sites that an admin might
> want to block.  Squid cannot.

Squid can, via external ACL. IIRC, Eliezer wrote an ICAP system that did
that too.

Also, the URL-rewrite helper cannot do anything if Squid cannot pass it
a URL. By nature of what the interface is designed to do.


> ufdbGuard can decide to do a lookup of a reverse IP lookup to make a
> decision.  Squid cannot.

Squid can via external ACL.

We have not had many (any?) requests for an ACL doing that. Patches welcome.


> ufdbGuard supports complex time restrictions for access. Squid support
> simple time restrictions.

Such as?

Squid supports complex time points and/or ranges. The time ACL is a
bitmap extending at 1 second intervals across an entire week. Further
extension is done with external ACL, note ACL and/or allof ACL.
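
For illustration, a minimal sketch of such a rule in squid.conf (ACL name
illustrative; days are given as single letters, M T W H F etc.):

  acl workhours time MTWHF 08:30-17:30
  http_access deny !workhours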


> ufdbGuard supports flat file domain/url lists and a commercial URL
> database.  Squid does not.
> And the list goes on.

I am still looking for a feature Squid does not actually support in one
way or another.

Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Amos Jeffries
On 21/09/18 3:46 AM, Donald Muller wrote:
> 
>> -Original Message-
>> From: Matus UHLAR - fantomas
>> Sent: Thursday, September 20, 2018 7:16 AM
>>
>> On 19.09.18 20:47, Donald Muller wrote:
>>> So instead of using squidguard are you saying  you should use something
>> like the following?
>>>
>>> acl ads dstdomain -i "/etc/squid/squid-ads.acl"
>>> acl adult dstdomain -i "/etc/squid/squid-adult.acl"
>>>
>>> http_access deny ads
>>> http_access deny adult
>>>
>>> Do the lists need to be sorted in alphabetical order?
>>
>> I don't think so - the lists are parsed into an in-memory format for
>> faster processing.
>>
> 
> Does Squid monitor dstdomain files for changes and reload them or does a '-k 
> reconfigure' need to be issued?
> 

Not currently. I'm looking for a nice portable way to do file watching.

The Linux inotify system can apparently be used to send the -k
reconfigure command on FS changes in the config directory. Though I've
yet to see any working example and have not had the time myself to
experiment on it.
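
A rough sketch of that idea (untested; assumes the inotify-tools package
and its inotifywait command are available):

  #!/bin/sh
  # re-run "squid -k reconfigure" whenever the config directory changes
  while inotifywait -e modify,create,delete /etc/squid/; do
      squid -k reconfigure
  done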

Patches and/or info welcome. This might be a good starter project if
anyone wants to dip their fingers into the Squid code.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Marcus Kool



On 20/09/18 08:46, Amos Jeffries wrote:

On 19/09/18 11:49 PM, Marcus Kool wrote:


On 18/09/18 23:03, Amos Jeffries wrote:

On 19/09/18 1:54 AM, neok wrote:

Thank you very much Amos for putting me in the right direction.
I successfully carried out the modifications you indicated to me.
Regarding ufdbGuard, if I understood correctly, what you recommend is
to use
the ufdbConvertDB tool to convert my blacklists in plain text to the
ufdbGuard database format? And then use that/those databases in
normal squid
ACL's?


No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where
HTTP access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.


ufdbGuard is a simple tool that has the same syntax in its configuration
file as squidGuard has.
It is far from complex, has a great Reference Manual, example config
file and a responsive support desk.
Amos, I have never seen you call a URL rewriter a complex and
destructive process.  What do you mean?


Re-writing requires Squid to:
  * fork external helpers, and
  * maintain queues of lookups to those helpers, and
  * maintain cache of helper responses, and
  * maintain a whole extra copy of HTTP-request state, and
  * copy some (not all) of that state info between the two "client" requests.

  ... lots of complexity, memory, CPU time, traffic latency, etc.


Squid itself is complex and for any feature of Squid one can make a list like 
above to say that it is complex.
The fact that one can make such a list does not mean much to me.
One can make the same or a similar list for external acl helpers and even 
native acls.


Also when used for access control (re-write to an "error" URL) the
re-write helper needs extra complexity in itself to act as the altered
origin server for error pages, or have some fourth-party web server.


Squid cannot do everything that a URL writer, and specifically ufdbGuard, can.
For example, Squid must restart and break all open connections when a tiny 
detail of the configuration changes.  With ufdbGuard this does not happen.
ufdbGuard supports dynamic lists of users, domains and source ip addresses 
which are updated every X minutes without any service interruption.
When other parameters change, ufdbGuard resets itself with zero service 
interruption for Squid and its users.
ufdbGuard can decide to probe a site to make a decision, and hence detect 
Skype, Teamviewer and other types of sites that an admin might want to block.  
Squid cannot.
ufdbGuard can decide to do a lookup of a reverse IP lookup to make a decision.  
Squid cannot.
ufdbGuard supports complex time restrictions for access. Squid support simple 
time restrictions.
ufdbGuard supports flat file domain/url lists and a commercial URL database.  
Squid does not.
And the list goes on.

So when you state on the mailing list that users should unconditionally stop
using a URL rewriter in favor of using Squid acls, you may be causing trouble
for admins who do not know the implications of your advice.




URL rewriters have been used for decades for HTTP access control but you
state "squid.conf is where HTTP access control takes place".


Once upon a time, back at the dawn of the WWW (in the 1990s) Squid
lacked external_acl_type and modular ACLs.

That persisted for the first decade or so of Squid's life, with only the
re-write API for admin to use for complicated permissions.

Then one day about 2 decades or so ago, external ACL was added and the
ACLs were also made much easier to implement and plug in new checks.
Today we have hundreds of native ACLs and even a selection of custom ACL
helpers, removing the need for these abuses of the poor re-writers.

Old habits and online tutorials however are hard to get rid of.


If you want to get rid of habits that in your view are old/obsolete, then why 
not start a discussion?
And in the event that at the end of the discussion, the decision is made that a 
particular interface should be removed, why not phase it out?


Are you saying that you want it to be the _only_ place for HTTP access
control?



I'm saying the purpose of the url_rewrite_* API in Squid is to tell
Squid whether the URL (only) needs some mangling in order for the
server/origin to understand it.
  It can re-write transparently with all the problems that causes to
security scopes and URL sync between the endpoints. Or redirect the
client to the "correct" URL.


The Squid http_access and similar *access controls* are the place for
access control - hint is in the naming. With external ACL type for
anything Squid does not support natively or well. As Flashdown mentioned
even calls to SquidGuard etc. can be wrapped and used as external ACLs.


Wrapping and external ACLs add the same complexity, memory, CPU time, traffic
latency, etc. that you use as an argument against a URL rewriter.

Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Donald Muller


> -Original Message-
> From: squid-users  On Behalf
> Of Matus UHLAR - fantomas
> Sent: Thursday, September 20, 2018 7:16 AM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> 
> On 19.09.18 20:47, Donald Muller wrote:
> >So instead of using squidguard are you saying  you should use something
> like the following?
> >
> >acl ads dstdomain -i "/etc/squid/squid-ads.acl"
> >acl adult dstdomain -i "/etc/squid/squid-adult.acl"
> >
> >http_access deny ads
> >http_access deny adult
> >
> >Do the lists need to be sorted in alphabetical order?
> 
>I don't think so - the lists are parsed into an in-memory format for faster
>processing.
> 

Does Squid monitor dstdomain files for changes and reload them or does a '-k 
reconfigure' need to be issued?

> The case where software like ufdbguard is important is where you use regular
> expressions like url_regex (but srcdom_regex and dstdom_regex may need it
> too).
> 
> Processing of those is very inefficient inside of squid.
> 
> 
> >> -Original Message-
> >> From: squid-users  On
> >> Behalf Of Amos Jeffries
> >> Sent: Tuesday, September 18, 2018 10:04 PM
> >> To: squid-users@lists.squid-cache.org
> >> Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> >>
> >> On 19/09/18 1:54 AM, neok wrote:
> >> > Thank you very much Amos for putting me in the right direction.
> >> > I successfully carried out the modifications you indicated to me.
> >> > Regarding ufdbGuard, if I understood correctly, what you recommend
> >> > is to use the ufdbConvertDB tool to convert my blacklists in plain
> >> > text to the ufdbGuard database format? And then use that/those
> >> > databases in normal squid ACL's?
> >>
> >> No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
> >> replacement which works better while you improve your config.
> >>
> >> You should work towards less complexity. Squid / squid.conf is where
> >> HTTP access control takes place. The helper is about re-writing the
> >> URL
> >> (only) - which is a complex and destructive process.
> 
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> Linux is like a teepee: no Windows, no Gates and an apache inside...
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Flashdown

I'm saying the purpose of the url_rewrite_* API in Squid is to tell
Squid whether the URL (only) needs some mangling in order for the
server/origin to understand it.
 It can re-write transparently with all the problems that causes to
security scopes and URL sync between the endpoints. Or redirect the
client to the "correct" URL.


The Squid http_access and similar *access controls* are the place for
access control - hint is in the naming. With external ACL type for
anything Squid does not support natively or well. As Flashdown 
mentioned

even calls to SquidGuard etc. can be wrapped and used as external ACLs.



Just want to add: in the beginning I thought about using a wrapper or
writing one, but as I found out during testing, SquidGuard gives back
the right responses to Squid, so a wrapper was not needed. The
rewrite-url in such a response is simply ignored by Squid and it works
like a charm. I hope ufdbguard can be used as an external acl helper
natively as well. My config line:
external_acl_type squidguard ipv4 concurrency=0 children-max=XXX 
children-startup=XX ttl=60 %URI %SRC %{-} %un %METHOD 
/usr/bin/squidGuard
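
For completeness, a sketch of how that helper definition is then used in
access control (ACL name illustrative; since squidGuard answers OK for
blacklisted requests, deny is the matching action):

acl sgfilter external squidguard
http_access deny sgfilter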


Taken out from my internal documentation:

"Manual testing:

echo "website.com 10.0.0.1/ - - GET" | squidGuard

Explanation of Responses:

ERR tells us: the access was not denied by SquidGuard, so either it is
not part of the blacklists or it is listed in the whitelist
BH message=“squidGuard error parsing squid line” tells us: there was
an error when checking your input; maybe you had a syntax error or there
is an issue in SquidGuard, the message param gives more insight.
OK rewrite-url=“https://127.0.0.1/” tells us: the item was found on
the blacklists and is blocked. BTW Squid only sees the OK and ignores
the rewrite command, since we didn't integrate it as a URL-rewrite
program which would have many disadvantages.


PS: This is just how an external ACL helper for Squid must work/respond.
So Squid only takes ERR and BH (including the message) and OK. That's why
I was able to implement it this way without writing a wrapper for it. "
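
For reference, a minimal external ACL helper along those lines (a sketch
only; assumes concurrency=0 so there is no channel-ID, with fields
arriving per the %URI %SRC %{-} %un %METHOD format above, and a
hypothetical blocked pattern):

#!/bin/sh
while read uri src ident user method; do
    case "$uri" in
        *blocked.example.com*) echo "OK" ;;  # matched: denied via http_access deny
        *) echo "ERR" ;;                     # not matched: allowed through
    esac
done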


Hope it helps, and I hope I can do the same with ufdbguard. The SquidGuard
version I use is the latest one from the official Debian repositories.




---
Best regards,
Flashdown
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Amos Jeffries
On 19/09/18 11:49 PM, Marcus Kool wrote:
> 
> On 18/09/18 23:03, Amos Jeffries wrote:
>> On 19/09/18 1:54 AM, neok wrote:
>>> Thank you very much Amos for putting me in the right direction.
>>> I successfully carried out the modifications you indicated to me.
>>> Regarding ufdbGuard, if I understood correctly, what you recommend is
>>> to use
>>> the ufdbConvertDB tool to convert my blacklists in plain text to the
>>> ufdbGuard database format? And then use that/those databases in
>>> normal squid
>>> ACL's?
>>
>> No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
>> replacement which works better while you improve your config.
>>
>> You should work towards less complexity. Squid / squid.conf is where
>> HTTP access control takes place. The helper is about re-writing the URL
>> (only) - which is a complex and destructive process.
> 
> ufdbGuard is a simple tool that has the same syntax in its configuration
> file as squidGuard has.
> It is far from complex, has a great Reference Manual, example config
> file and a responsive support desk.
> Amos, I have never seen you call a URL rewriter a complex and
> destructive process.  What do you mean?

Re-writing requires Squid to:
 * fork external helpers, and
 * maintain queues of lookups to those helpers, and
 * maintain cache of helper responses, and
 * maintain a whole extra copy of HTTP-request state, and
 * copy some (not all) of that state info between the two "client" requests.

 ... lots of complexity, memory, CPU time, traffic latency, etc.

Also when used for access control (re-write to an "error" URL) the
re-write helper needs extra complexity in itself to act as the altered
origin server for error pages, or have some fourth-party web server.


> 
> URL rewriters have been used for decades for HTTP access control but you
> state "squid.conf is where HTTP access control takes place".

Once upon a time, back at the dawn of the WWW (in the 1990s) Squid
lacked external_acl_type and modular ACLs.

That persisted for the first decade or so of Squid's life, with only the
re-write API for admin to use for complicated permissions.

Then one day about 2 decades or so ago, external ACL was added and the
ACLs were also made much easier to implement and plug in new checks.
Today we have hundreds of native ACLs and even a selection of custom ACL
helpers, removing the need for these abuses of the poor re-writers.

Old habits and online tutorials however are hard to get rid of.


> Are you saying that you want it to be the _only_ place for HTTP access
> control?


I'm saying the purpose of the url_rewrite_* API in Squid is to tell
Squid whether the URL (only) needs some mangling in order for the
server/origin to understand it.
 It can re-write transparently with all the problems that causes to
security scopes and URL sync between the endpoints. Or redirect the
client to the "correct" URL.


The Squid http_access and similar *access controls* are the place for
access control - hint is in the naming. With external ACL type for
anything Squid does not support natively or well. As Flashdown mentioned
even calls to SquidGuard etc. can be wrapped and used as external ACLs.
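
As an illustration, a minimal sketch of that access-control style (the
list file and helper path here are hypothetical):

  # simple blocking with a native ACL
  acl blocked dstdomain -i "/etc/squid/blocked-domains.acl"
  http_access deny blocked

  # more complicated decisions via an external ACL helper
  external_acl_type filter ttl=60 %URI %SRC %un /usr/local/bin/filter-helper
  acl filtered external filter
  http_access deny filtered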


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-20 Thread Matus UHLAR - fantomas

On 19.09.18 20:47, Donald Muller wrote:

So instead of using squidguard are you saying  you should use something like 
the following?

acl ads dstdomain -i "/etc/squid/squid-ads.acl"
acl adult dstdomain -i "/etc/squid/squid-adult.acl"

http_access deny ads
http_access deny adult

Do the lists need to be sorted in alphabetical order?


I don't think so - the lists are parsed into an in-memory format for faster
processing.

The case where software like ufdbguard is important is where you use regular
expressions like url_regex (but srcdom_regex and dstdom_regex may need it
too).

Processing of those is very inefficient inside of squid.
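
For reference, the kind of ACL in question (pattern illustrative):

  acl streams url_regex -i \.(mp4|avi)$

Each request URL may be matched against every such pattern, which is what
makes large regex lists costly inside Squid.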



-Original Message-
From: squid-users  On Behalf
Of Amos Jeffries
Sent: Tuesday, September 18, 2018 10:04 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Help: squid restarts and squidGuard die

On 19/09/18 1:54 AM, neok wrote:
> Thank you very much Amos for putting me in the right direction.
> I successfully carried out the modifications you indicated to me.
> Regarding ufdbGuard, if I understood correctly, what you recommend is
> to use the ufdbConvertDB tool to convert my blacklists in plain text
> to the ufdbGuard database format? And then use that/those databases in
> normal squid ACL's?

No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where HTTP
access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux is like a teepee: no Windows, no Gates and an apache inside...
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-19 Thread Donald Muller
Amos,

So instead of using squidguard are you saying  you should use something like 
the following?

acl ads dstdomain -i "/etc/squid/squid-ads.acl"
acl adult dstdomain -i "/etc/squid/squid-adult.acl"

http_access deny ads
http_access deny adult

Do the lists need to be sorted in alphabetical order?

Don

> -Original Message-
> From: squid-users  On Behalf
> Of Amos Jeffries
> Sent: Tuesday, September 18, 2018 10:04 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Help: squid restarts and squidGuard die
> 
> On 19/09/18 1:54 AM, neok wrote:
> > Thank you very much Amos for putting me in the right direction.
> > I successfully carried out the modifications you indicated to me.
> > Regarding ufdbGuard, if I understood correctly, what you recommend is
> > to use the ufdbConvertDB tool to convert my blacklists in plain text
> > to the ufdbGuard database format? And then use that/those databases in
> > normal squid ACL's?
> 
> No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
> replacement which works better while you improve your config.
> 
> You should work towards less complexity. Squid / squid.conf is where HTTP
> access control takes place. The helper is about re-writing the URL
> (only) - which is a complex and destructive process.
> 
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-19 Thread Marcus Kool



On 18/09/18 23:03, Amos Jeffries wrote:

On 19/09/18 1:54 AM, neok wrote:

Thank you very much Amos for putting me in the right direction.
I successfully carried out the modifications you indicated to me.
Regarding ufdbGuard, if I understood correctly, what you recommend is to use
the ufdbConvertDB tool to convert my blacklists in plain text to the
ufdbGuard database format? And then use that/those databases in normal squid
ACL's?


No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where
HTTP access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.


ufdbGuard is a simple tool that has the same syntax in its configuration file 
as squidGuard has.
It is far from complex, has a great Reference Manual, example config file and a
responsive support desk.
Amos, I have never seen you call a URL rewriter a complex and
destructive process.  What do you mean?

URL rewriters have been used for decades for HTTP access control but you state 
"squid.conf is where HTTP access control takes place".
Are you saying that you want it to be the _only_ place for HTTP access control?

Marcus



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-19 Thread Enrico Heine
Thank you for this information Amos! :) I had ufdbguard as a possible
replacement on my list; your info about it being a fork is the reason that I
will switch to it soon. Thanks :)

On 19 September 2018 04:03:39 MESZ, Amos Jeffries wrote:
>On 19/09/18 1:54 AM, neok wrote:
>> Thank you very much Amos for putting me in the right direction.
>> I successfully carried out the modifications you indicated to me.
>> Regarding ufdbGuard, if I understood correctly, what you recommend is
>to use
>> the ufdbConvertDB tool to convert my blacklists in plain text to the
>> ufdbGuard database format? And then use that/those databases in
>normal squid
>> ACL's?
>
>No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
>replacement which works better while you improve your config.
>
>You should work towards less complexity. Squid / squid.conf is where
>HTTP access control takes place. The helper is about re-writing the URL
>(only) - which is a complex and destructive process.
>
>Amos
>___
>squid-users mailing list
>squid-users@lists.squid-cache.org
>http://lists.squid-cache.org/listinfo/squid-users

-- 
This message was sent from my Android device with K-9 Mail.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-18 Thread Amos Jeffries
On 19/09/18 1:54 AM, neok wrote:
> Thank you very much Amos for putting me in the right direction.
> I successfully carried out the modifications you indicated to me.
> Regarding ufdbGuard, if I understood correctly, what you recommend is to use
> the ufdbConvertDB tool to convert my blacklists in plain text to the
> ufdbGuard database format? And then use that/those databases in normal squid
> ACL's?

No, ufdbguard is a fork of SquidGuard that can be used as a drop-in
replacement which works better while you improve your config.

You should work towards less complexity. Squid / squid.conf is where
HTTP access control takes place. The helper is about re-writing the URL
(only) - which is a complex and destructive process.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-18 Thread neok
Thank you very much Amos for putting me in the right direction.
I successfully carried out the modifications you indicated to me.
Regarding ufdbGuard, if I understood correctly, what you recommend is to use
the ufdbConvertDB tool to convert my blacklists in plain text to the
ufdbGuard database format? And then use that/those databases in normal squid
ACL's?



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-17 Thread Flashdown
Just want to add: I use SquidGuard in two high-load setups and never ran into
issues. I didn't integrate it as a url rewrite helper but as an external acl
helper, and it works great with 800 users.

On 17 September 2018 20:38:06 MESZ, Amos Jeffries wrote:
>On 18/09/18 3:37 AM, Service MV wrote:
>> Dear Ones, I draw on your experience in seeking help to determine
>> whether or not it is possible to achieve the configuration I am
>looking
>> for, due to a strange error I am having.
>
>FYI: SquidGuard has not been maintained for many years now.
>
>I recommend you convert as many of your filtering rules as you can into
>normal Squid ACLs. Traffic which is being blocked for simple reasons
>can
>be done much more efficiently by Squid than a helper.
>
>You can use the more up-to-date ufdbguard helper as a drop-in
>replacement for squidguard during the conversion.
>
>
>
>> 
>> Before commenting on the bug I describe my testing environment:
>> - A VM CentOS 7 Core over VirtualBox 5.2, 1 NIC.
>> - My VM is attached to my domain W2012R2 (following this post
>>
>https://www.rootusers.com/how-to-join-centos-linux-to-an-active-directory-domain/)
>> to achieve kerberos authentication transparent to the user. SElinux
>> disabled. Owner permissions to user squid in all folders/files
>involved.
>> - squid 3.5.20 installed and working great with kerberos, NTLM and
>basic
>> authentication. All authentication mechanisms tested and working
>great.
>> - SquidGuard: 1.4 Berkeley DB 5.3.21 installed and working great with
>> blacklists and acl default.
>> 
>> My problem starts when I try to use source acl using ldapusersearch
>in
>> squidGuard... 
>> 
>> systemctl status squid:
>> (squid-1)[12627]: The redirector helpers are crashing too rapidly,
>need
>> help!
>> 
>> *squidGuard.conf*
>> 
>> dbhome /etc/squid/db
>> logdir /var/log/squidGuard
>> ldapbinddn
>>
>CN=ldap,OU=SERVICIOS,OU=SISTEMAS,OU=CANAL,OU=MYCOMPANY,DC=mydomain,DC=local
>> ldapbindpass myULTRAsecretPASS
>> ldapprotover 3
>> 
>> 
>> src WEB_BASIC {
>> ldapusersearch
>>
>ldap://dc-1.mydomain.local:3268/dc=mydomain,dc=local?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=WEB_BASIC%2cou=INTERNET%2cou=PERMISOS%2cou=MYCOMPANY%2cdc=mydomain%2cdc=local))
>> log block.log
>> }
>> 
>...
>> 
>> acl {
>> 
>> WEB_BASIC{
>> pass whitelist !BL_porn !blacklist all
>> redirect
>>
>http://s-server1.mydomain.local/cgi-bin/squidGuard.cgi?clientaddr=%a=%n=%i=%s=%t=%u
>> log block.log
>> }
>> 
>...
>
>
>> *squid.conf*
>> 
>> acl localnet src 10.10.8.0/22 # LAN net
>> acl dmz src 192.168.20.0/27   # DMZ net
>
>These ACLs are never used due to what you are doing with the "auth"
>ACL.
>
>...
>> 
>> ### acl for proxy authentication (kerberos or ntlm) and ldap
>authorizations
>> acl auth proxy_auth REQUIRED
>> 
>> # Define protocols used for redirects
>> acl HTTP proto HTTP
>> acl HTTPS proto HTTPS
>
>These have nothing to do with redirects and are never used.
>
>> 
>> ### enforce authentication
>> http_access allow auth 
>> http_access deny !auth
>> 
>
>All possible traffic will match either "auth" or "!auth" above.
>
>That means no http_access rules following this point do anything.
>
>
>> ### standard access rules
>> http_access deny !Safe_ports 
>> http_access deny CONNECT !SSL_ports 
>> http_access allow localhost manager 
>> http_access deny manager
>
>Your custom http_access rules (eg the auth checks) should be down here
>so the basic security rules above have a chance to protect your proxy
>against DoS, traffic smuggling attacks etc. before more complicated and
>resource consuming things happen.
>
>
>> http_access allow localnet
>> http_access allow dmz
>> http_access allow localhost 
>> http_access deny all
>> 
>
>...
>> visible_hostname eren 
>
>The hostname needs to be a FQDN. It is delivered to clients in URLs
>generated by Squid so they can fetch objects directly from the proxy.
>
>FYI: Squid-3 should be able to automatically locate the hostname of the
>machine it is running on. If that is not working then you need to fix
>your machine, other software will be using the same mechanism and
>likewise be encountering problems.
>
>
>> httpd_suppress_version_string on 
>> uri_whitespace strip
>> 
>> 
>> ## squidGuard ##
>> url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
>> url_rewrite_children 10 startup=5 idle=1 concurrency=0
>> url_rewrite_bypass off
>> 
>> 
>
>Your traffic in your access.log is all CONNECT requests. Those messages
>cannot be re-written by SquidGuard. So at the very least you require
>this config line:
>
> url_rewrite_access deny CONNECT
>
>
>.. at this point you may notice your SG rules have no effect. This is
>one of many reasons why you should do access control in the proxy
>config, not externally in a complicated and slow helper.
>
>> 
>> *messages*
>> 
>> Sep 17 11:13:07 proxy kernel: squidGuard[12552]: segfault at
>> d7706bb0 ip 7fdbf2052e70 sp 7fffd1b73c70 error 5 in
>> libldap-2.4.so.2.10.7[7fdbf2027000+52000]
>> 

Re: [squid-users] Help: squid restarts and squidGuard die

2018-09-17 Thread Amos Jeffries
On 18/09/18 3:37 AM, Service MV wrote:
> Dear Ones, I draw on your experience in seeking help to determine
> whether or not it is possible to achieve the configuration I am looking
> for, due to a strange error I am having.

FYI: SquidGuard has not been maintained for many years now.

I recommend you convert as many of your filtering rules as you can into
normal Squid ACLs. Traffic which is being blocked for simple reasons can
be done much more efficiently by Squid than a helper.

You can use the more up-to-date ufdbguard helper as a drop-in
replacement for squidguard during the conversion.



> 
> Before commenting on the bug I describe my testing environment:
> - A VM CentOS 7 Core over VirtualBox 5.2, 1 NIC.
> - My VM is attached to my domain W2012R2 (following this post
> https://www.rootusers.com/how-to-join-centos-linux-to-an-active-directory-domain/)
> to achieve kerberos authentication transparent to the user. SElinux
> disabled. Owner permissions to user squid in all folders/files involved.
> - squid 3.5.20 installed and working great with kerberos, NTLM and basic
> authentication. All authentication mechanisms tested and working great.
> - SquidGuard: 1.4 Berkeley DB 5.3.21 installed and working great with
> blacklists and acl default.
> 
> My problem starts when I try to use source acl using ldapusersearch in
> squidGuard... 
> 
> systemctl status squid:
> (squid-1)[12627]: The redirector helpers are crashing too rapidly, need
> help!
> 
> *squidGuard.conf*
> 
> dbhome /etc/squid/db
> logdir /var/log/squidGuard
> ldapbinddn
> CN=ldap,OU=SERVICIOS,OU=SISTEMAS,OU=CANAL,OU=MYCOMPANY,DC=mydomain,DC=local
> ldapbindpass myULTRAsecretPASS
> ldapprotover 3
> 
> 
> src WEB_BASIC {
> ldapusersearch
> ldap://dc-1.mydomain.local:3268/dc=mydomain,dc=local?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=WEB_BASIC%2cou=INTERNET%2cou=PERMISOS%2cou=MYCOMPANY%2cdc=mydomain%2cdc=local))
> log block.log
> }
> 
...
> 
> acl {
> 
> WEB_BASIC{
> pass whitelist !BL_porn !blacklist all
> redirect
> http://s-server1.mydomain.local/cgi-bin/squidGuard.cgi?clientaddr=%a=%n=%i=%s=%t=%u
> log block.log
> }
> 
...


> *squid.conf*
> 
> acl localnet src 10.10.8.0/22 # LAN net
> acl dmz src 192.168.20.0/27   # DMZ net

These ACLs are never used due to what you are doing with the "auth" ACL.

...
> 
> ### acl for proxy authentication (kerberos or ntlm) and ldap authorizations
> acl auth proxy_auth REQUIRED
> 
> # Define protocols used for redirects
> acl HTTP proto HTTP
> acl HTTPS proto HTTPS

These have nothing to do with redirects and are never used.

> 
> ### enforce authentication
> http_access allow auth 
> http_access deny !auth
> 

All possible traffic will match either "auth" or "!auth" above.

That means no http_access rules following this point do anything.


> ### standard access rules
> http_access deny !Safe_ports 
> http_access deny CONNECT !SSL_ports 
> http_access allow localhost manager 
> http_access deny manager

Your custom http_access rules (eg the auth checks) should be down here
so the basic security rules above have a chance to protect your proxy
against DoS, traffic smuggling attacks etc. before more complicated and
resource consuming things happen.
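
For example, a sketch of the suggested ordering, using the rules already
shown above:

  # basic protection first
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager
  # then the expensive authentication checks
  http_access allow auth
  http_access deny !auth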


> http_access allow localnet
> http_access allow dmz
> http_access allow localhost 
> http_access deny all
> 

...
> visible_hostname eren 

The hostname needs to be a FQDN. It is delivered to clients in URLs
generated by Squid so they can fetch objects directly from the proxy.
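
For example (domain taken from your config; substitute the machine's
real FQDN):

  visible_hostname eren.mydomain.local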

FYI: Squid-3 should be able to automatically locate the hostname of the
machine it is running on. If that is not working then you need to fix
your machine, other software will be using the same mechanism and
likewise be encountering problems.


> httpd_suppress_version_string on 
> uri_whitespace strip
> 
> 
> ## squidGuard ##
> url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
> url_rewrite_children 10 startup=5 idle=1 concurrency=0
> url_rewrite_bypass off
> 
> 

Your traffic in your access.log is all CONNECT requests. Those messages
cannot be re-written by SquidGuard. So at the very least you require
this config line:

 url_rewrite_access deny CONNECT


.. at this point you may notice your SG rules have no effect. This is
one of many reasons why you should do access control in the proxy
config, not externally in a complicated and slow helper.

> 
> *messages*
> 
> Sep 17 11:13:07 proxy kernel: squidGuard[12552]: segfault at
> d7706bb0 ip 7fdbf2052e70 sp 7fffd1b73c70 error 5 in
> libldap-2.4.so.2.10.7[7fdbf2027000+52000]
> Sep 17 11:13:07 proxy kernel: squidGuard[12553]: segfault at
> a3d27bb0 ip 7fd79b787e70 sp 7ffe47e9b880 error 5 in
> libldap-2.4.so.2.10.7[7fd79b75c000+52000]

...

> 
> If I disable src and acl WEB_BASIC I have no problem. The default acl
> does its thing without problems.
> But when I enable src and acl WEB_BASIC squidGuard explodes and squid
> restarts so I get to notice.
> I see an error in a libldap 

[squid-users] Help: squid restarts and squidGuard die

2018-09-17 Thread Service MV
Dear Ones, I draw on your experience in seeking help to determine whether
or not it is possible to achieve the configuration I am looking for, due to
a strange error I am having.

Before commenting on the bug I describe my testing environment:
- A VM CentOS 7 Core over VirtualBox 5.2, 1 NIC.
- My VM is attached to my domain W2012R2 (following this post
https://www.rootusers.com/how-to-join-centos-linux-to-an-active-directory-domain/)
to achieve kerberos authentication transparent to the user. SElinux
disabled. Owner permissions to user squid in all folders/files involved.
- squid 3.5.20 installed and working great with kerberos, NTLM and basic
authentication. All authentication mechanisms tested and working great.
- SquidGuard: 1.4 Berkeley DB 5.3.21 installed and working great with
blacklists and acl default.

My problem starts when I try to use source acl using ldapusersearch in
squidGuard...

systemctl status squid:
(squid-1)[12627]: The redirector helpers are crashing too rapidly, need
help!

*squidGuard.conf*

dbhome /etc/squid/db
logdir /var/log/squidGuard
ldapbinddn
CN=ldap,OU=SERVICIOS,OU=SISTEMAS,OU=CANAL,OU=MYCOMPANY,DC=mydomain,DC=local
ldapbindpass myULTRAsecretPASS
ldapprotover 3


src WEB_BASIC {
ldapusersearch
ldap://dc-1.mydomain.local:3268/dc=mydomain,dc=local?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=WEB_BASIC%2cou=INTERNET%2cou=PERMISOS%2cou=MYCOMPANY%2cdc=mydomain%2cdc=local))
log block.log
}

dest BL_adv {
domainlist adv/domains
urllist adv/urls
log block.log
}

dest BL_aggressive {
domainlist aggressive/domains
urllist aggressive/urls
log block.log
}
#
dest BL_alcohol {
domainlist alcohol/domains
urllist alcohol/urls
log block.log
}
#
dest BL_anonvpn {
domainlist anonvpn/domains
urllist anonvpn/urls
log block.log
}
#
dest BL_chat {
domainlist chat/domains
urllist chat/urls
log block.log
}
#
dest BL_costtraps {
domainlist costtraps/domains
urllist costtraps/urls
log block.log
}
#
dest BL_downloads {
domainlist downloads/domains
urllist downloads/urls
log block.log
}
#
dest BL_drugs {
domainlist drugs/domains
urllist drugs/urls
log block.log
}
#
dest BL_dynamic {
domainlist dynamic/domains
log block.log
}
#
dest BL_fortunetelling {
domainlist fortunetelling/domains
urllist fortunetelling/urls
log block.log
}
#
dest BL_gamble {
domainlist gamble/domains
urllist gamble/urls
log block.log
}
#
dest BL_government {
domainlist government/domains
urllist government/urls
log block.log
}
#
dest BL_hacking {
domainlist hacking/domains
urllist hacking/urls
log block.log
}
#
dest BL_hobby_games-misc {
domainlist hobby/games-misc/domains
urllist hobby/games-misc/urls
log block.log
}
#
dest BL_hobby_games-online {
domainlist hobby/games-online/domains
urllist hobby/games-online/urls
log block.log
}
#
dest BL_movies {
domainlist movies/domains
urllist movies/urls
log block.log
}
#
dest BL_music {
domainlist music/domains
urllist music/urls
log block.log
}
#
dest BL_porn {
domainlist porn/domains
urllist porn/urls
log block.log
}
#
dest BL_radiotv {
domainlist radiotv/domains
urllist radiotv/urls
log block.log
}
#
dest BL_redirector {
domainlist redirector/domains
urllist redirector/urls
log block.log
}
#
dest BL_remotecontrol {
domainlist remotecontrol/domains
urllist remotecontrol/urls
log block.log
}
#
dest BL_ringtones {
domainlist ringtones/domains
urllist ringtones/urls
log block.log
}
#
dest BL_socialnet {
domainlist socialnet/domains
urllist socialnet/urls
log block.log
}
#
dest BL_spyware {
domainlist spyware/domains
urllist spyware/urls
log block.log
}
#
dest BL_tracker {
domainlist tracker/domains
urllist tracker/urls
log block.log
}
#
dest BL_updatesites {
domainlist updatesites/domains
urllist updatesites/urls
log block.log
}
#
dest BL_violence {
domainlist violence/domains
urllist violence/urls
log block.log
}
#
dest BL_warez {
domainlist warez/domains
urllist warez/urls
log block.log
}
#
dest BL_weapons {
domainlist weapons/domains
urllist weapons/urls
log block.log
}
#
dest BL_webphone {
domainlist webphone/domains
urllist webphone/urls
log block.log
}
#
dest BL_webradio {
domainlist webradio/domains
urllist webradio/urls
log block.log
}
#
dest BL_WEBTV {
domainlist webtv/domains
urllist webtv/urls
log block.log
}


dest whitelist {
domainlist whitelist/domains
log block.log
}

dest blacklist {
domainlist blacklist/domains
log block.log
}


acl {

WEB_BASIC {
pass whitelist !BL_porn !blacklist all
redirect
http://s-server1.mydomain.local/cgi-bin/squidGuard.cgi?clientaddr=%a=%n=%i=%s=%t=%u
log block.log
}

default {
pass !blacklist all
redirect
http://s-server1.mydomain.local/cgi-bin/squidGuard.cgi?clientaddr=%a=%n=%i=%s=%t=%u
log block.log
}

}


*squidGuard.log*

2018-09-17 11:13:39 [12663] New setting: dbhome: /etc/squid/db
2018-09-17 11:13:39 [12663] New setting: logdir: /var/log/squidGuard
2018-09-17 11:13:39 [12663] New setting: ldapbinddn:
CN=ldap,OU=SERVICIOS,OU=SISTEMAS,OU=CANAL,OU=MYCOMPANY,DC=mydomain,DC=local

Re: [squid-users] Help Team Squid

2018-08-12 Thread Amos Jeffries
On 13/08/18 12:30, John Renzi Manzo wrote:
> Good day team squid,
>                  Please help me,
> I am using squid 3.0 on our Windows Server 2012 R2; I have
> already configured it.

First thing is please try an upgrade. Squid-3.0 was deprecated in 2010.

For more current packages see


>                  Ban sites and allow specific ip addresses to browse all
> sites, but the problem is is i cannot open our website. Please see
> attached file for your reference.

No attached config file.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help Team Squid

2018-08-12 Thread John Renzi Manzo
Good day team squid,
 Please help me,
 I am using squid 3.0 on our Windows Server 2012 R2; I have
already configured it.
 It bans sites and allows specific IP addresses to browse all
sites, but the problem is I cannot open our website. Please see attached
file for your reference.
  What should I do to access our website?
  Thank you in advance.

Renz
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-09 Thread Amos Jeffries
On 09/05/18 18:36, Ilias Clifton wrote:
> Ubuntu box is able to connect to the internet ok. If client PCs are 
> configured to use the Ubuntu box as proxy on port 3128 it works correctly.
> 
> No hits in access.log for any transparent clients via wccp.. No network 
> response at all from Ubuntu.
> 
> 
> If I change the iptables REDIRECT to a DNAT
> iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j DNAT 
> --to-destination 172.28.28.252:3129
> 
> 
> I do get part of the TCP handshake done..
> 
> On the Ubuntu proxy I get :
> 
> on the wccp0 interface:
> IP 172.28.29.4.53057 > 216.58.203.100.80 SYN
> 
> on the ens33 interface:
> IP 216.58.203.100.80 > 172.28.29.4.53057 SYN,ACK
> 
> The client sees the SYN,ACK, it replies and thinks it has a session
> IP 172.28.29.4.53057 > 216.58.203.100.80 ACK
> IP 172.28.29.4.53057 > 216.58.203.100.80 GET / HTTP/1.1
> 
> But really these packets are lost and never make it back to the proxy.

So the problem is likely the router settings for how those packets are
handled. Anything you can do to figure out where they are going would
be useful.


> 
> I've tried adding the following iptables rules, but reply packets still have 
> the source ip as the original destination.
> 

Ah, that sounds like it is correct to me. The client thinks it is
talking to the origin server, not the proxy. So all the src-IP on the
reply packets have to be masqueraded as the origin server IP.


> iptables -t nat -A POSTROUTING -o ens33 -j MASQUERADE
> iptables -t nat -A POSTROUTING -o wccp0 -j MASQUERADE
> 
> Still no hits in the access.log
> 
> Should I be attempting to reply back down the gre tunnel with the REDIRECT, 
> or replying directly to the client via DNAT? Is there any change to the squid
> config between these 2 options?

You configured Squid's return method as gre, so the gre tunnel should be
used for those packets. Or you could try configuring the router and
Squid as L2 return method - which seems to be the one half-working now.
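
A minimal sketch of that alternative in squid.conf, assuming the router
side is configured to match:

  wccp2_forwarding_method l2
  wccp2_return_method l2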


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-09 Thread Ilias Clifton
Ubuntu box is able to connect to the internet ok. If client PCs are configured 
to use the Ubuntu box as proxy on port 3128 it works correctly.

No hits in access.log for any transparent clients via wccp.. No network 
response at all from Ubuntu.


If I change the iptables REDIRECT to a DNAT
iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j DNAT 
--to-destination 172.28.28.252:3129


I do get part of the TCP handshake done..

On the Ubuntu proxy I get :

on the wccp0 interface:
IP 172.28.29.4.53057 > 216.58.203.100.80 SYN

on the ens33 interface:
IP 216.58.203.100.80 > 172.28.29.4.53057 SYN,ACK

The client sees the SYN,ACK, it replies and thinks it has a session
IP 172.28.29.4.53057 > 216.58.203.100.80 ACK
IP 172.28.29.4.53057 > 216.58.203.100.80 GET / HTTP/1.1

But really these packets are lost and never make it back to the proxy.

I've tried adding the following iptables rules, but reply packets still have 
the source ip as the original destination.

iptables -t nat -A POSTROUTING -o ens33 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wccp0 -j MASQUERADE

Still no hits in the access.log

Should I be attempting to reply back down the gre tunnel with the REDIRECT, or
replying directly to the client via DNAT? Is there any change to the squid
config between these 2 options?

The clients are in a different subnet to the Ubuntu box if that makes any 
difference to how I should be replying.



 
 

Sent: Wednesday, May 09, 2018 at 3:08 PM
From: "Alex K" <rightkickt...@gmail.com>
To: "Ilias Clifton" <adili...@gmx.com>
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 
16

Is the Ubuntu box able to reach the Internet?
Do you see any events in the squid access log?
 
Alex
  

On Wed, May 9, 2018, 07:59 Ilias Clifton 
<adili...@gmx.com> wrote:
 Hi Alex,

On the wccp0 interface I only see traffic arriving in 1 direction - original 
client ip to destination ip.

The ubuntu box only has a single ethernet interface -  Sorry, that should have 
been in my original question. I see the gre traffic arriving from the router, 
but again - no response.

I tried adding a MASQUERADE line to the iptables rules, just to see if it made 
a difference.. but same result.


 

Sent: Wednesday, May 09, 2018 at 2:37 PM
From: "Alex K" <rightkickt...@gmail.com[mailto:rightkickt...@gmail.com]>
To: "Ilias Clifton" <adili...@gmx.com[mailto:adili...@gmx.com]>
Cc: squid-users@lists.squid-cache.org[mailto:squid-users@lists.squid-cache.org]
Subject: Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 
16

Hi,
 
At the wccp0 interface do you see bidirectional http traffic? If the squid box
has multiple interfaces, do you see traffic on its wan interface? That traffic
might need NATing. Also I would check if the squid box drops any packets in
case you have a firewall configured on it.
 
Alex
  

On Wed, May 9, 2018, 07:22 Ilias Clifton 
<adili...@gmx.com> wrote:
Hello,
 
I've been trying to get WCCP working but have been banging my head against a 
wall, so thought I would ask for help.
 
There are 2 internal subnets that I would like to use the squid proxy: 
172.28.30.128/25 and 172.28.29.0/25
 
I have squid v3.5.25 running on Ubuntu 16 : 172.28.28.252
 
I have a Cisco 1841 - Adv IP - 12.4, see relevant config:
 
#Inside Interface
interface FastEthernet0/1
 ip address 172.28.28.1 255.255.255.240
 ip wccp web-cache redirect in
 ip nat inside
 ip virtual-reassembly max-reassemblies 64
 no ip mroute-cache
 duplex auto
 speed auto
 
#Loopback for wccp router ID
interface Loopback0
 ip address 172.28.28.33 255.255.255.255
 
ip wccp web-cache redirect-list PROXY_USERS group-list SQUID
 
ip access-list extended PROXY_USERS
 deny   tcp host 172.28.28.252 any
 permit tcp 172.28.30.128 0.0.0.127 any eq www
 permit tcp 172.28.29.0 0.0.0.127 any eq www
 deny   ip any any
 
ip access-list standard SQUID
 permit 172.28.28.252
 
 
 
On the Ubuntu box, I have the squid with the following config:
 
http_port 3128
http_port 3129 intercept 
acl localnet src 172.28.28.0/22
http_access allow localnet
http_access allow localhost
http_access deny all
visible_hostname Squid
wccp2_router 172.28.28.1
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0
 
If clients are manually set to use the proxy on port 3128, they work correctly.
 
Again on the Ubuntu box, I have setup the following gre tunnel.
 
ip tunnel add wccp0 mode gre remote 172.28.28.33 local 172.28.28.252 dev ens33 
ttl 255
 
and the following redirect using iptables..
 
ipta

Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-08 Thread Amos Jeffries
On 09/05/18 16:59, Ilias Clifton wrote:
> 
>  Hi Alex,
> 
> On the wccp0 interface I only see traffic arriving in 1 direction - original 
> client ip to destination ip.
> 
> The ubuntu box only has a single ethernet interface -  Sorry, that should 
> have been in my original question. I see the gre traffic arriving from the 
> router, but again - no response.
> 
> I tried adding a MASQUERADE line to the iptables rules, just to see if it 
> made a difference.. but same result.
> 

The MASQUERADE (or an equivalent SNAT) on the reply traffic going from
Squid back to the router is *definitely* needed to balance the REDIRECT
rule. Otherwise the router will reject or mishandle packets Squid sends
over the gre when you do get that part working.



> 
> Sent: Wednesday, May 09, 2018 at 2:37 PM
> From: "Alex K"
> 
> When I try and browse to a site from a client..
$ wget http://www.google.com
> 
> On the Ubuntu box, I see gre traffic on the ethernet interface..
> 00:44:22.340734 IP 172.28.28.33 > 172.28.28.252[http://172.28.28.252]: GREv0, 
> length 72: gre-proto-0x883e
> 
> 
> I see the un-encapsulated traffic on the wccp0 interface:
> 00:56:26.888519 IP 172.28.29.4.52128 > 216.58.203.100.80
> 
> Which is correctly showing original client IP and destination IP.
> 
> I can see hits on the iptable redirect rule:
> pkts bytes target     prot opt in     out     source               
> destination         
>   429 26280 REDIRECT   tcp  --  wccp0  any     anywhere             anywhere  
>            tcp dpt:http redir ports 3129
> 
> 
> But there is no response from squid on the Ubuntu box :-(

Is there outbound Squid<->server traffic happening? And what does that
look like?

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-08 Thread Alex K
Is the Ubuntu box able to reach the Internet?
Do you see any events in the squid access log?

Alex


On Wed, May 9, 2018, 07:59 Ilias Clifton <adili...@gmx.com> wrote:

>
>  Hi Alex,
>
> On the wccp0 interface I only see traffic arriving in 1 direction -
> original client ip to destination ip.
>
> The ubuntu box only has a single ethernet interface -  Sorry, that should
> have been in my original question. I see the gre traffic arriving from the
> router, but again - no response.
>
> I tried adding a MASQUERADE line to the iptables rules, just to see if it
> made a difference.. but same result.
>
>
>
>
> Sent: Wednesday, May 09, 2018 at 2:37 PM
> From: "Alex K" <rightkickt...@gmail.com>
> To: "Ilias Clifton" <adili...@gmx.com>
> Cc: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on
> Ubuntu 16
>
> Hi,
>
> At the wccp0 interface do you see bidirectional http traffic? If the
> squid box has multiple interfaces, do you see traffic on its wan interface?
> That traffic might need NATing. Also I would check if the squid box drops
> any packets in case you have a firewall configured on it.
>
> Alex
>
>
> On Wed, May 9, 2018, 07:59 Ilias Clifton <adili...@gmx.com> wrote:
> Hello,
>
> I've been trying to get WCCP working but have been banging my head against
> a wall, so thought I would ask for help.
>
> There are 2 internal subnets that I would like to use the squid proxy:
> 172.28.30.128/25 and 172.28.29.0/25
>
> I have squid v3.5.25 running on Ubuntu 16 : 172.28.28.252
>
> I have a Cisco 1841 - Adv IP - 12.4, see relevant config:
>
> #Inside Interface
> interface FastEthernet0/1
>  ip address 172.28.28.1 255.255.255.240
>  ip wccp web-cache redirect in
>  ip nat inside
>  ip virtual-reassembly max-reassemblies 64
>  no ip mroute-cache
>  duplex auto
>  speed auto
>
> #Loopback for wccp router ID
> interface Loopback0
>  ip address 172.28.28.33 255.255.255.255
>
> ip wccp web-cache redirect-list PROXY_USERS group-list SQUID
>
> ip access-list extended PROXY_USERS
>  deny   tcp host 172.28.28.252 any
>  permit tcp 172.28.30.128 0.0.0.127 any eq www
>  permit tcp 172.28.29.0 0.0.0.127 any eq www
>  deny   ip any any
>
> ip access-list standard SQUID
>  permit 172.28.28.252
>
>
>
> On the Ubuntu box, I have the squid with the following config:
>
> http_port 3128
> http_port 3129 intercept
> acl localnet src 172.28.28.0/22
> http_access allow localnet
> http_access allow localhost
> http_access deny all
> visible_hostname Squid
> wccp2_router 172.28.28.1
> wccp2_forwarding_method gre
> wccp2_return_method gre
> wccp2_service standard 0
>
> If clients are manually set to use the proxy on port 3128, they work
> correctly.
>
> Again on the Ubuntu box, I have setup the following gre tunnel.
>
> ip tunnel add wccp0 mode gre remote 172.28.28.33 local 172.28.28.252 dev
> ens33 ttl 255
>
> and the following redirect using iptables..
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
>
> In sysctl.conf, I have disabled reverse path filtering and enabled ip
> forwarding.
>
> net.ipv4.conf.default.rp_filter=0
> net.ipv4.conf.all.rp_filter=0
> net.ipv4.ip_forward=1
>
> When starting squid, using tcpdump, i see traffic between the Ubuntu box
> and the router on udp port 2048
>
> 00:39:34.587799 IP 172.28.28.252.2048 > 172.28.28.1.2048: UDP, length 144
> 00:39:34.590399 IP 172.28.28.1.2048 > 172.28.28.252.2048: UDP, length 140
>
> I see the following message on the router..
> %WCCP-5-SERVICEFOUND: Service web-cache acquired on WCCP client
> 172.28.28.252
>
> So looks like it's working ok so far...
>
> When I try and browse to a site from a client..
> $ wget http://www.google.com
>
> On the Ubuntu box, I see gre traffic on the ethernet interface..
> 00:44:22.340734 IP 172.28.28.33 > 172.28.28.252: GREv0, length 72: gre-proto-0x883e
>
>
> I see the un-encapsulated traffic on the wccp0 interface:
> 00:56:26.888519 IP 172.28.29.4.52128 > 216.58.203.100.80
>
> Which is correctly showing original client IP and destination IP.
>
> I can see hits on the iptable redirect rule:
> pkts bytes target prot opt in out sourc

Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-08 Thread Ilias Clifton

 Hi Alex,

On the wccp0 interface I only see traffic arriving in 1 direction - original 
client ip to destination ip.

The ubuntu box only has a single ethernet interface -  Sorry, that should have 
been in my original question. I see the gre traffic arriving from the router, 
but again - no response.

I tried adding a MASQUERADE line to the iptables rules, just to see if it made 
a difference.. but same result.


 

Sent: Wednesday, May 09, 2018 at 2:37 PM
From: "Alex K" <rightkickt...@gmail.com>
To: "Ilias Clifton" <adili...@gmx.com>
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 
16

Hi,
 
At the wccp0  interface do you see bidirectional http traffic? If the squid box 
has multiple interfaces, do you see traffic on its wan interface? That traffic 
might need NATing. Also I would check if squidbox drops any packages in case 
you have firewall configured on it.
 
Alex
  

On Wed, May 9, 2018, 07:22 Ilias Clifton 
<adili...@gmx.com[mailto:adili...@gmx.com]> wrote:
Hello,
 
I've been trying to get WCCP working but have been banging my head against a 
wall, so thought I would ask for help.
 
There are 2 internal subnets that I would like to use the squid proxy: 
172.28.30.128/25 and 172.28.29.0/25
 
I have squid v3.5.25 running on Ubuntu 16 : 172.28.28.252
 
I have a Cisco 1841 - Adv IP - 12.4, see relevant config:
 
#Inside Interface
interface FastEthernet0/1
 ip address 172.28.28.1 255.255.255.240
 ip wccp web-cache redirect in
 ip nat inside
 ip virtual-reassembly max-reassemblies 64
 no ip mroute-cache
 duplex auto
 speed auto
 
#Loopback for wccp router ID
interface Loopback0
 ip address 172.28.28.33 255.255.255.255
 
ip wccp web-cache redirect-list PROXY_USERS group-list SQUID
 
ip access-list extended PROXY_USERS
 deny   tcp host 172.28.28.252 any
 permit tcp 172.28.30.128 0.0.0.127 any eq www
 permit tcp 172.28.29.0 0.0.0.127 any eq www
 deny   ip any any
 
ip access-list standard SQUID
 permit 172.28.28.252
 
 
 
On the Ubuntu box, I have the squid with the following config:
 
http_port 3128
http_port 3129 intercept 
acl localnet src 172.28.28.0/22
http_access allow localnet
http_access allow localhost
http_access deny all
visible_hostname Squid
wccp2_router 172.28.28.1
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0
 
If clients are manually set to use the proxy on port 3128, they work correctly.
 
Again on the Ubuntu box, I have setup the following gre tunnel.
 
ip tunnel add wccp0 mode gre remote 172.28.28.33 local 172.28.28.252 dev ens33 ttl 255
 
and the following redirect using iptables..
 
iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
 
In sysctl.conf, I have disabled reverse path filtering and enabled ip forwarding.
 
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.ipv4.ip_forward=1

When starting squid, using tcpdump, i see traffic between the Ubuntu box and 
the router on udp port 2048

00:39:34.587799 IP 172.28.28.252.2048 > 172.28.28.1.2048: UDP, length 144
00:39:34.590399 IP 172.28.28.1.2048 > 172.28.28.252.2048: UDP, length 140

I see the following message on the router..
%WCCP-5-SERVICEFOUND: Service web-cache acquired on WCCP client 172.28.28.252

So looks like it's working ok so far...

When I try and browse to a site from a client..
$ wget http://www.google.com

On the Ubuntu box, I see gre traffic on the ethernet interface..
00:44:22.340734 IP 172.28.28.33 > 172.28.28.252: GREv0, length 72: gre-proto-0x883e


I see the un-encapsulated traffic on the wccp0 interface:
00:56:26.888519 IP 172.28.29.4.52128 > 216.58.203.100.80

Which is correctly showing original client IP and destination IP.

I can see hits on the iptable redirect rule:
pkts bytes target     prot opt in     out     source               destination
 429 26280 REDIRECT   tcp  --  wccp0  any     anywhere             anywhere             tcp dpt:http redir ports 3129


But there is no response from squid on the Ubuntu box :-(

I don't see anything helpful in either access.log or cache.log.

I'm not sure if there is anything else that could be dropping the packet apart 
from return path filtering..

If someone could give me some pointers or any further debugging I could try, 
that would be great.


Thanks.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-08 Thread Alex K
Hi,

At the wccp0  interface do you see bidirectional http traffic? If the squid
box has multiple interfaces, do you see traffic on its wan interface? That
traffic might need NATing. Also I would check if squidbox drops any
packages in case you have firewall configured on it.
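
As an illustration only (a hedged sketch, not from the original mails,
assuming the single interface is named ens33):

  iptables -t nat -A POSTROUTING -o ens33 -j MASQUERADE

This makes forwarded packets leave with the proxy's own source address, so
replies return to the Squid box rather than going straight back to the
client.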

Alex



On Wed, May 9, 2018, 07:22 Ilias Clifton  wrote:

>
> Hello,
>
> I've been trying to get WCCP working but have been banging my head against
> a wall, so thought I would ask for help.
>
> There are 2 internal subnets that I would like to use the squid proxy:
> 172.28.30.128/25 and 172.28.29.0/25
>
> I have squid v3.5.25 running on Ubuntu 16 : 172.28.28.252
>
> I have a Cisco 1841 - Adv IP - 12.4, see relevant config:
>
> #Inside Interface
> interface FastEthernet0/1
>  ip address 172.28.28.1 255.255.255.240
>  ip wccp web-cache redirect in
>  ip nat inside
>  ip virtual-reassembly max-reassemblies 64
>  no ip mroute-cache
>  duplex auto
>  speed auto
>
> #Loopback for wccp router ID
> interface Loopback0
>  ip address 172.28.28.33 255.255.255.255
>
> ip wccp web-cache redirect-list PROXY_USERS group-list SQUID
>
> ip access-list extended PROXY_USERS
>  deny   tcp host 172.28.28.252 any
>  permit tcp 172.28.30.128 0.0.0.127 any eq www
>  permit tcp 172.28.29.0 0.0.0.127 any eq www
>  deny   ip any any
>
> ip access-list standard SQUID
>  permit 172.28.28.252
>
>
>
> On the Ubuntu box, I have the squid with the following config:
>
> http_port 3128
> http_port 3129 intercept
> acl localnet src 172.28.28.0/22
> http_access allow localnet
> http_access allow localhost
> http_access deny all
> visible_hostname Squid
> wccp2_router 172.28.28.1
> wccp2_forwarding_method gre
> wccp2_return_method gre
> wccp2_service standard 0
>
> If clients are manually set to use the proxy on port 3128, they work
> correctly.
>
> Again on the Ubuntu box, I have setup the following gre tunnel.
>
> ip tunnel add wccp0 mode gre remote 172.28.28.33 local 172.28.28.252 dev ens33 ttl 255
>
> and the following redirect using iptables..
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
>
> In sysctl.conf, I have disabled reverse path filtering and enabled ip
> forwarding.
>
> net.ipv4.conf.default.rp_filter=0
> net.ipv4.conf.all.rp_filter=0
> net.ipv4.ip_forward=1
>
> When starting squid, using tcpdump, i see traffic between the Ubuntu box
> and the router on udp port 2048
>
> 00:39:34.587799 IP 172.28.28.252.2048 > 172.28.28.1.2048: UDP, length 144
> 00:39:34.590399 IP 172.28.28.1.2048 > 172.28.28.252.2048: UDP, length 140
>
> I see the following message on the router..
> %WCCP-5-SERVICEFOUND: Service web-cache acquired on WCCP client
> 172.28.28.252
>
> So looks like it's working ok so far...
>
> When I try and browse to a site from a client..
> $ wget http://www.google.com
>
> On the Ubuntu box, I see gre traffic on the ethernet interface..
> 00:44:22.340734 IP 172.28.28.33 > 172.28.28.252: GREv0, length 72:
> gre-proto-0x883e
>
>
> I see the un-encapsulated traffic on the wccp0 interface:
> 00:56:26.888519 IP 172.28.29.4.52128 > 216.58.203.100.80
>
> Which is correctly showing original client IP and destination IP.
>
> I can see hits on the iptable redirect rule:
> pkts bytes target     prot opt in     out     source               destination
>  429 26280 REDIRECT   tcp  --  wccp0  any     anywhere             anywhere             tcp dpt:http redir ports 3129
>
>
> But there is no response from squid on the Ubuntu box :-(
>
> I don't see anything helpful in either access.log or cache.log.
>
> I'm not sure if there is anything else that could be dropping the packet
> apart from return path filtering..
>
> If someone could give me some pointers or any further debugging I could
> try, that would be great.
>
>
> Thanks.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Help with WCCP: Cisco 1841 to Squid 3.5.25 on Ubuntu 16

2018-05-08 Thread Ilias Clifton

Hello,
 
I've been trying to get WCCP working but have been banging my head against a 
wall, so thought I would ask for help.
 
There are 2 internal subnets that I would like to use the squid proxy: 
172.28.30.128/25 and 172.28.29.0/25
 
I have squid v3.5.25 running on Ubuntu 16 : 172.28.28.252
 
I have a Cisco 1841 - Adv IP - 12.4, see relevant config:
 
#Inside Interface
interface FastEthernet0/1
 ip address 172.28.28.1 255.255.255.240
 ip wccp web-cache redirect in
 ip nat inside
 ip virtual-reassembly max-reassemblies 64
 no ip mroute-cache
 duplex auto
 speed auto
 
#Loopback for wccp router ID
interface Loopback0
 ip address 172.28.28.33 255.255.255.255
 
ip wccp web-cache redirect-list PROXY_USERS group-list SQUID
 
ip access-list extended PROXY_USERS
 deny   tcp host 172.28.28.252 any
 permit tcp 172.28.30.128 0.0.0.127 any eq www
 permit tcp 172.28.29.0 0.0.0.127 any eq www
 deny   ip any any
 
ip access-list standard SQUID
 permit 172.28.28.252
 
 
 
On the Ubuntu box, I have the squid with the following config:
 
http_port 3128
http_port 3129 intercept 
acl localnet src 172.28.28.0/22   
http_access allow localnet
http_access allow localhost
http_access deny all
visible_hostname Squid
wccp2_router 172.28.28.1
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0
 
If clients are manually set to use the proxy on port 3128, they work correctly.
 
Again on the Ubuntu box, I have setup the following gre tunnel.
 
ip tunnel add wccp0 mode gre remote 172.28.28.33 local 172.28.28.252 dev ens33 ttl 255
 
and the following redirect using iptables..
 
iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3129
 
In sysctl.conf, I have disabled reverse path filtering and enabled ip forwarding.
 
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.ipv4.ip_forward=1
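
Two details may be worth double-checking at this point (a hedged aside, not
part of the original report): Linux applies the higher of the "all" and
per-interface rp_filter values, and the tunnel interface must be up.

  sysctl net.ipv4.conf.ens33.rp_filter net.ipv4.conf.wccp0.rp_filter
  sysctl -w net.ipv4.conf.ens33.rp_filter=0   # only if the check above shows 1
  sysctl -w net.ipv4.conf.wccp0.rp_filter=0   # only if the check above shows 1
  ip link set wccp0 up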

When starting squid, using tcpdump, i see traffic between the Ubuntu box and 
the router on udp port 2048

00:39:34.587799 IP 172.28.28.252.2048 > 172.28.28.1.2048: UDP, length 144
00:39:34.590399 IP 172.28.28.1.2048 > 172.28.28.252.2048: UDP, length 140

I see the following message on the router..
%WCCP-5-SERVICEFOUND: Service web-cache acquired on WCCP client 172.28.28.252

So looks like it's working ok so far...

When I try and browse to a site from a client..
$ wget http://www.google.com

On the Ubuntu box, I see gre traffic on the ethernet interface..
00:44:22.340734 IP 172.28.28.33 > 172.28.28.252: GREv0, length 72: 
gre-proto-0x883e


I see the un-encapsulated traffic on the wccp0 interface:
00:56:26.888519 IP 172.28.29.4.52128 > 216.58.203.100.80

Which is correctly showing original client IP and destination IP.

I can see hits on the iptable redirect rule:
pkts bytes target     prot opt in     out     source               destination
 429 26280 REDIRECT   tcp  --  wccp0  any     anywhere             anywhere             tcp dpt:http redir ports 3129


But there is no response from squid on the Ubuntu box :-(

I don't see anything helpful in either access.log or cache.log.

I'm not sure if there is anything else that could be dropping the packet apart 
from return path filtering..

If someone could give me some pointers or any further debugging I could try, 
that would be great.


Thanks.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with the error TCP_MISS_ABORTED/000

2018-02-28 Thread Juan Manuel P
Amos, tell me what more you need to analyze the incident.

Every time I access http://www.rionegro.gov.ar I get the error
TCP_MISS_ABORTED/000, but if I access the SSL version,
https://www.rionegro.gov.ar, the error does NOT occur.


regards.
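
If it helps, this is the kind of trace I can collect (a hedged sketch; the
client address 10.12.43.20 comes from my earlier log line):

  tcpdump -s0 -w abort.pcap host 10.12.43.20
  squid -k debug    # toggles full cache.log debugging; run again to toggle off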




2018-02-28 1:00 GMT-03:00 Amos Jeffries :

> On 28/02/18 15:12, L A Walsh wrote:
> > Juan Manuel P wrote:
> >> I am using Squid Cache: Version 3.5.12, but some pages give me the
> >> next error:
> >>
> >> 1/Feb/2018:18:14:40 -0300 || - || 10.12.43.20 ||
> >> TCP_MISS_ABORTED/000|| GET ||
> >> http://www.rionegro.gov.ar/download/images/00033494.jpg
> >>  || -
> > 
> >I don't know what causes it, but I see it frequently and have for a
> few
> > years.  Currently am running 3.15.3.  Was told it might be due to some
>
> Er, which version exactly?
>
> > cache corruption -- but having removed it several times over the past
> > few years, I sorta doubt that.  Also, I'm attempting https interception,
> > now
> > but wasn't when I first encountered this message...
>
> It can come from many reasons. One has to look at all the clues about
> where the message came from, which (if any) server was involved, how
> long it took, etc.
>
> I could write a book on the things and ways it *might* happen. But
> whether any of that is relevant or just hand-wavey ideas is anyone's
> guess at present.
>
>
> From what I know about it currently; corruption HTTP cache entries might
> be a side-effect, but very unlikely to be a cause of this.
>
> Other types of caches used by Squid may be involved, eg the DNS cache
> pointing Squid to a server IP that is not responding. But I don't recall
> any instances of that data being corrupted, and the duration of the
> transaction is far too short to have been some kind of timeout on the
> server side of things (immediate TCP reject from invalid server IPs
> shows up completely differently.)
>
>
> * ABORTED almost always means the client[1] disconnected from the TCP
> connection. That can happen for any number of reasons at the TCP, IP and
> Ethernet layers which Squid is not party to.
>
> ... emphasis on "almost". In the case of CONNECT messages and NTLM
> Pinned connections the server can also trigger disconnect for both
> endpoints.
>
>
> * HTTPS introduces several additional possible causes. Primarily that
> CONNECT message server abort just mentioned. But also if SSL-Bump is
> being done, any TLS errors that result in "terminate" action will also
> show up as aborts at the TCP and HTTP(S) layers. Squid logging of those
> cases has been a bit buggy and there may be issues still yet to be found
> there.
>  That does not explain the pre-HTTPS occurrences. But also due to the
> vague nature of the abort reason those earlier aborts may be completely
> different in cause from your current ones.
>
>
> The "/000" status means no HTTP reply was received from a server. Abort
> can also happen any time *after* a server response is received, but
> those should be logged with their status codes not 000.
>  - That may be because no server was even contacted. ie the client
> disconnected immediately, or during the wait for server DNS records to
> arrive, or for any other reason the client has.
>
>
> SSL-Bump clouds the issue a lot because it will naturally take some
> amount of time to do any of its checks and any server contact for server
> cert details etc. Again the client can disconnect for any number of
> reasons during all that, with the same ABORTED/000 end result.
>
>
> So. Maybe bug, maybe not. "insufficient data".
>
>
> >
> >Out of last 5000 requests in my squid log, I see 101 of the
> miss_aborted
> > statuses.  I just wrote a note on stackexchange, then went to look for
> > something on amazon (this is output from a squid log compression tool
> > that was mostly for listing site, request time and size:  A few lines
> > from no more than 15 minutes ago (usually shows time between requests,
> but
> > periodically, there's a full timestamp)...
> >
> >Part of my shortening process removes the TCP_ before error messages,
> > thus my error is just "MISS_ABORTED".
> >
> > Since this is a grep of that shortened log, the time increments since
> > last message are not referring to line immediately above in the grep:
> >
> >
> > [0227_172940.00]  379ms; 0(0/0) MISS_ABORTED/000 [https://qa.sockets.stackexchange.com/ - 198.252.206.25 -]
> >  +0.38   372ms; 0(0/0) MISS_ABORTED/000 [https://qa.sockets.stackexchange.com/ - 198.252.206.25 -]
> >  +0.01 1ms; 0(0/0) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/61x0MG3xpeL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
> >  +0.00 0ms; 0(-/-) MISS_ABORTED/000 [

Re: [squid-users] help with the error TCP_MISS_ABORTED/000

2018-02-27 Thread L A Walsh

Juan Manuel P wrote:
I am using Squid Cache: Version 3.5.12, but some pages give me the 
next error:


1/Feb/2018:18:14:40 -0300 || - || 10.12.43.20 || 
TCP_MISS_ABORTED/000|| GET || 
http://www.rionegro.gov.ar/download/images/00033494.jpg 
 || -


   I don't know what causes it, but I see it frequently and have for a few
years.  Currently am running 3.15.3.  Was told it might be due to some
cache corruption -- but having removed it several times over the past
few years, I sorta doubt that.  Also, I'm attempting https interception, now
but wasn't when I first encountered this message...

   Out of last 5000 requests in my squid log, I see 101 of the miss_aborted
statuses.  I just wrote a note on stackexchange, then went to look for
something on amazon (this is output from a squid log compression tool
that was mostly for listing site, request time and size:  A few lines
from no more than 15 minutes ago (usually shows time between requests, but
periodically, there's a full timestamp)...

   Part of my shortening process removes the TCP_ before error messages,
thus my error is just "MISS_ABORTED".

Since this is a grep of that shortened log, the time increments since
last message are not referring to line immediately above in the grep:


[0227_172940.00]  379ms; 0(0/0) MISS_ABORTED/000 [https://qa.sockets.stackexchange.com/ - 198.252.206.25 -]
 +0.38   372ms; 0(0/0) MISS_ABORTED/000 [https://qa.sockets.stackexchange.com/ - 198.252.206.25 -]
 +0.01 1ms; 0(0/0) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/61x0MG3xpeL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
 +0.00 0ms; 0(-/-) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/813zL5eetaL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
 +0.00 0ms; 0(-/-) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/61Uo2hXZlpL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
 +0.00 0ms; 0(-/-) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/71XggjYZ7qL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
 +0.00 1ms; 0(-/0) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/71LT8PAs-OL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
 +0.00 1ms; 0(-/0) MISS_ABORTED/000 [https://images-na.ssl-images-amazon.com/images/I/51X+70QICxL._AC_UL160_SR160,160_.jpg - 54.230.117.34 -]
[0227_173215.00]   16ms; 0(0/0) MISS_ABORTED/000 [https://www.amazon.com/gp/uedata?ul=0.200071.0=TXSV6J9MMKFJ764232PB=1=1=TXSV6J9MMKFJ764232PB=3758697=-286=1=3=3758697=1519781535329=mouseHit=Detail=Glance=B079GH97R9=TXSV6J9MMKFJ764232PB=1 - 23.192.244.68 -]




Of note -- a bunch were in trying to fetch a sockets address on
stackexchange, while most of the amazon lines seem to be referring to
jpgs.  Anyway, I too would be interested if you find the answer.


Just thought I'd mention that your seeing the message isn't unique.

Found someone else who asked the same question back in May 2015...
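
A hedged aside for anyone digging further: on Squid 3.2 and later the
logformat codes %err_code and %err_detail can record why a transaction
ended; for example, extending the default "squid" format (the name
squiddetail is an assumption):

  logformat squiddetail %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %err_code/%err_detail
  access_log /var/log/squid/access.log squiddetail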


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] help with the error TCP_MISS_ABORTED/000

2018-02-26 Thread Yuri
1519672183.376  3 192.168.201.10 TCP_MEM_HIT/200 99641 GET
http://www.rionegro.gov.ar/download/images/00033494.jpg - HIER_NONE/- image/jpeg

Reply size = 99,641 bytes.

No problem on 3.5.27 and 5.0.0.

Try to upgrade proxy first.
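
To compare results, the same object can be re-fetched through the proxy (a
hedged sketch; the proxy address and port are assumptions):

  curl -x http://127.0.0.1:3128 -o /dev/null -s \
    -w '%{http_code} %{size_download}\n' \
    http://www.rionegro.gov.ar/download/images/00033494.jpg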


27.02.2018 00:57, Juan Manuel P wrote:
> I am using Squid Cache: Version 3.5.12, but some pages give me the
> next error:
>
> 1/Feb/2018:18:14:40 -0300 || - || 10.12.43.20 ||
> TCP_MISS_ABORTED/000|| GET ||
> http://www.rionegro.gov.ar/download/images/00033494.jpg
>  || -
>
> And it loads so slowly.
>
> I found that the problem can originate in this param -->
> dns_v4_first on, so I configured it and restarted the server, but the
> error still appears.
>
> Can someone help me, please?
>
> Regards.
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
*
* C++20 : Bug to the future *
*



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] help with the error TCP_MISS_ABORTED/000

2018-02-26 Thread Juan Manuel P
I am using Squid Cache: Version 3.5.12, but some pages give me the next
error:

1/Feb/2018:18:14:40 -0300 || - || 10.12.43.20 || TCP_MISS_ABORTED/000|| GET
|| http://www.rionegro.gov.ar/download/images/00033494.jpg || -

And it loads so slowly.

I found that the problem can originate in this param --> dns_v4_first on,
so I configured it and restarted the server, but the error still appears.

Can someone help me, please?

Regards.
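
(A hedged check related to dns_v4_first, not part of the original mail: see
whether the host publishes both A and AAAA records, since dns_v4_first only
changes which address family Squid tries first.)

  dig +short A www.rionegro.gov.ar
  dig +short AAAA www.rionegro.gov.ar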
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with UA filtering in https connections

2018-01-03 Thread Alex Rousskov
On 01/03/2018 10:38 AM, Matus UHLAR - fantomas wrote:

>> In a general case, the admin has to pick between two evils:
>>
>> * Allow TLS handshakes with arbitrary servers on TLS ports (my sketch)
>>
>> * or tell Squid to respond with error pages that the user cannot see
>>  (without bypassing browser security warnings).
>>
>> Which evil is lesser is up to the admin to decide.


>> (*) We should allow CONNECTs to SSL_ports, not Safe_ports. I hope my
>> sketch did not use those ACLs.

> I'm afraid you did.

I did not:
http://lists.squid-cache.org/pipermail/squid-users/2017-December/017268.html

I used toSafePorts which is not one of the default ACLs (but may contain
them). The admin should define the ACLs left out of the sketch
correctly, of course. Moreover, I would rename toSafePorts to
toConnectableDestinations or similar to emphasize that this is the right
place to ban CONNECTs to wrong/dangerous/etc. addresses.


> I'm also afraid that your proposal also prevents us from disabling
> CONNECTs later

If you are saying that my simple sketch does not address all possible
use cases, then I certainly agree! I believe it addressed what OP
requested, but if I misinterpreted his or her desires, I apologize. I
hope the general description quoted at the start of this email, combined
with Amos's and your warnings about undesirable CONNECT destinations, will
allow them to fix their configuration as needed.
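
For concreteness, one way to express the CONNECT restrictions being
discussed (a hedged sketch, not the sketch from the earlier post; localnet
and toConnectableDestinations are assumed to be defined by the admin):

  acl SSL_ports port 443
  http_access deny CONNECT !SSL_ports
  http_access deny CONNECT !toConnectableDestinations
  http_access allow localnet CONNECT
  http_access deny all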

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with UA filtering in https connections

2018-01-03 Thread Matus UHLAR - fantomas

On 01/03/2018 05:52 AM, Matus UHLAR - fantomas wrote:

On 02.01.18 09:06, Alex Rousskov wrote:

On 01/02/2018 07:08 AM, Matus UHLAR - fantomas wrote:

On 02.01.18 06:04, squidnoob wrote:

http_access allow CONNECT safe_ports
http_access deny CONNECT



the two lines above unconditionally allow CONNECT anywhere,



This is incorrect. The lines deny CONNECT to unsafe ports.



Those lines unconditionally allow CONNECT requests to safe ports ANYWHERE,


On 03.01.18 08:55, Alex Rousskov wrote:

Yes, or, to be more precise, they (together with ssl_bump rules) allow
fetching of any server certificate from a reasonable(*) port. They do
not allow HTTP requests to arbitrary safe ports. Only Squid-generated
TLS handshakes.




which is apparently not what was wanted/expected.


Why not?


because there can be many reasons to deny a CONNECT request, for example
one destined to localhost or to the internal network.

in the default config, these directives are at the beginning, before
checking for allowed clients and destinations is done.

in the provided config:
http://lists.squid-cache.org/pipermail/squid-users/2017-December/017267.html

there are no deny directives before "http_access allow SSL_port"
and so it's quite possible that all clients that should not have access will
be allowed.

of course, there MAY be other directives or measures to avoid that
but I really think it's better to put "deny CONNECT !SSL_ports"
than allow CONNECT and later wonder why some requests are not 




that in this case you can[not] deny the connect request later,


Denying CONNECTs at step1 does not really work well in a general case
because, during SslBump step1, Squid does not have enough information to
generate the right certificate for the access denial error page.


I don't think this matters when we do have "http_access deny CONNECT" in
both cases.


In a general case, the admin has to pick between two evils:

* Allow TLS handshakes with arbitrary servers on TLS ports (my sketch)

* or tell Squid to respond with error pages that the user cannot see
 (without bypassing browser security warnings).

Which evil is lesser is up to the admin to decide. Needless to say,
there are environments where both strategies should be used, depending
on the transaction parameters.


(*) We should allow CONNECTs to SSL_ports, not Safe_ports. I hope my
sketch did not use those ACLs.


I'm afraid you did.
and I'm also afraid that your proposal also prevents us from disabling
CONNECTs later:

http://lists.squid-cache.org/pipermail/squid-users/2017-December/017268.html

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Atheism is a non-prophet organization. 
___

squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with UA filtering in https connections

2018-01-03 Thread Alex Rousskov
On 01/03/2018 05:52 AM, Matus UHLAR - fantomas wrote:
> On 02.01.18 09:06, Alex Rousskov wrote:
>> On 01/02/2018 07:08 AM, Matus UHLAR - fantomas wrote:
>>> On 02.01.18 06:04, squidnoob wrote:
 http_access allow CONNECT safe_ports
 http_access deny CONNECT

>>> the two lines above unconditionally allow CONNECT anywhere,

>> This is incorrect. The lines deny CONNECT to unsafe ports.

> Those lines unconditionally allow CONNECT requests to safe ports ANYWHERE,

Yes, or, to be more precise, they (together with ssl_bump rules) allow
fetching of any server certificate from a reasonable(*) port. They do
not allow HTTP requests to arbitrary safe ports. Only Squid-generated
TLS handshakes.


> which is apparently not what was wanted/expected.

Why not?


> that in this case you can[not] deny the connect request later,

Denying CONNECTs at step1 does not really work well in a general case
because, during SslBump step1, Squid does not have enough information to
generate the right certificate for the access denial error page.

In a general case, the admin has to pick between two evils:

* Allow TLS handshakes with arbitrary servers on TLS ports (my sketch)

* or tell Squid to respond with error pages that the user cannot see
  (without bypassing browser security warnings).

Which evil is lesser is up to the admin to decide. Needless to say,
there are environments where both strategies should be used, depending
on the transaction parameters.


(*) We should allow CONNECTs to SSL_ports, not Safe_ports. I hope my
sketch did not use those ACLs.

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Help with UA filtering in https connections

2018-01-03 Thread Matus UHLAR - fantomas

On 03.01.18 13:52, Matus UHLAR - fantomas wrote:

http_access deny CONNECT !safe_ports

... in this case you can deny the connect request later, unlike the
previous example, where the CONNECT was allowed and further checks are done.


corrected: _no_ further checks are done.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Nothing is fool-proof to a talented fool. 
___

squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

