Re: [squid-users] Is it possible to apply squid delay pools on users/groups from AD ?

2017-11-16 Thread Amos Jeffries

On 16/11/17 01:43, Bike dernikov1 wrote:

Hi,
this is my second topic; I wouldn't want to mix it with the first. I hope that is OK.
I hope that someone has succeeded in applying delay pools to users/groups from AD.
We are now using a delay pool on the whole 10.0.0.0/8, but that is a
problem as different users have different requirements. We have 30
locations, and we can set different rules by IP, but then we would
need one rule per location and would have to use static IPs and
reconfigure the network. That would be an administration nightmare,
and we would like to avoid static IPs for users.


It depends on your Squid version.

The latest Squid versions with annotation support are capable of receiving 
user/group names from the auth and external ACL helpers. These get 
attached to the transaction and can be matched with the 'note' type ACL 
in any later 'fast-category' access controls such as delay_pools.
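
As a hedged illustration of the pieces described above (the helper path,
group name, and rates below are hypothetical; the helper must reply in
Squid-3.4+ syntax, e.g. "OK group=Staff", for the annotation to exist),
a squid.conf sketch might look like:

```
# Sketch only: assumes a helper at this (hypothetical) path that looks up
# the user in AD and replies "OK group=Staff" (Squid-3.4+ helper syntax),
# which attaches a "group" annotation to the transaction.
external_acl_type ad_group ttl=300 %LOGIN /usr/local/bin/ad_group_lookup
acl in_ad_group external ad_group

# 'note' ACLs match annotations and are fast-category, so delay_access
# can evaluate them.
acl staff note group Staff

http_access allow in_ad_group       # runs the lookup, attaching the annotation

delay_pools 1
delay_class 1 1
delay_parameters 1 64000/64000      # example: ~512 kbit/s for this pool
delay_access 1 allow staff
delay_access 1 deny all
```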


If your Squid is too old to use the note ACL, or your helper(s) are not 
providing the relevant details to Squid (in Squid-3.4+ helper syntax), 
then no, sorry.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Bump for regex URL comparison

2017-11-16 Thread Amos Jeffries

On 16/11/17 02:32, Joe Foster wrote:

Good afternoon,

I have a small router onto which I have installed Squid.

I am trying to filter HTTPS URLs for bad words on a blocked list.

It will require the client on the safe side of the router to install the
certificate; this isn't an issue as it's an open process and not an
illegal MITM attack.

Below is my squid.conf

As you will see I have been playing around with where to put the code
and what code to put in.

I only have a small amount of flash storage, so I have put the auto-generated
cert directory in /tmp/. I am aware this is volatile memory, but until I
have a better solution I will be doing this.


Since /tmp is subject to random deletion of content, you will need to 
make sure you always shut down Squid and re-run the ssl_crtd (etc.) 
create command to re-generate the cert DB structures whenever the device 
erases its /tmp content. Otherwise your proxy will crash and/or client 
connections will start being terminated with strange-looking errors.



IMO you would probably be better off setting the cert DB to a very small 
size suitable for your limited space - or disabling it entirely [more on 
that below].
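
The regeneration step mentioned above could be scripted at boot, before
Squid starts; a sketch only (the ssl_crtd path and DB size are examples
and depend on your build):

```
# Boot-script fragment (sketch): re-create the cert DB if /tmp was wiped.
if [ ! -d /tmp/ssl_db ]; then
    /usr/lib/squid/ssl_crtd -c -s /tmp/ssl_db -M 4MB
    chown -R squid:squid /tmp/ssl_db
fi
squid
```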




I have put a firewall rule in to forward 443 to 3128.

https://wiki.squid-cache.org/Features/SslBump
https://wiki.squid-cache.org/SquidFaq/SquidAcl

I also don't want to cache due to flash drive issues. Is this possible?



From the documentation of the SSL-Bump settings:
 
"
  dynamic_cert_mem_cache_size=SIZE
Approximate total RAM size spent on cached generated
certificates. If set to zero, caching is disabled. The
default value is 4MB.
"


It's the same cert in /root/ and /certs/, before anyone points it out.

Nothing has been appearing in the log files either, but this is no
surprise.

Been up till 1am the last few nights on this, so your assistance is very
much appreciated.


That sounds like you are having a problem. But I don't see any mention 
of what that is exactly.


Amos


Re: [squid-users] SSL Bump for regex URL comparison

2017-11-16 Thread Joe Foster
Hello Amos,

The problem is the connections are not getting through. It just acts like
there is no WiFi connection.

Adding the cert db on every start-up isn't an issue.

Since then I have been thinking of having a small local cert cache instead.

The connections just aren't being made. No SSL warning.

Thank you

Joe


On Thu, 16 Nov 2017 at 08:15, Amos Jeffries  wrote:

> On 16/11/17 02:32, Joe Foster wrote:
> > Good afternoon,
> >
> > I have a small router onto which I have installed Squid.
> >
> > I am trying to filter HTTPS urls for bad words on a blocked list.
> >
> > It will require the client on the safe side of the router to install the
> > certificate, this isn't an issue as it's an open process and not an
> > illigal MITM attack.
> >
> > Below is my squid.conf
> >
> > As you will see I have been playing around with where to put the code
> > and what code to put in.
> >
> > I only have a small amount of flash drive so I have put the auto-gen
> > cert directory in /tmp/. I am aware this is volatile memory but until I
> > have a better solution I will be doing this.
>
> Since /tmp is subject to random deletion of content you will need to
> make sure you always shutdown Squid and re-run the ssl_crtd (etc.)
> create command to re-generate the cert DB structures whenever the device
> erases its /tmp content. Otherwise your proxy will crash and/or client
> connections will start being terminated with strange looking errors.
>
>
> IMO you would probably be better off setting the cert DB to a very small
> size suitable for your limited space - or disabling it entirely [more on
> that below].
>
> >
> > I have put a firewall rule in to forward 443 to 3128.
> >
> > https://wiki.squid-cache.org/Features/SslBump
> > https://wiki.squid-cache.org/SquidFaq/SquidAcl
> >
> > I also don't want to cache due to flash drive issues. Is this possible?
> >
>
>  From the documentation of the SSL-Bump settings:
>   
> "
>dynamic_cert_mem_cache_size=SIZE
>  Approximate total RAM size spent on cached generated
>  certificates. If set to zero, caching is disabled. The
>  default value is 4MB.
> "
>
> > Its the same cert in /root/ and /certs/ before anyone points it out.
> >
> > Nothing has been appearing in the log files either but this is no
> > surprise.
> >
> > Been up till 1am last few nights on this so you assistance is very
> > appreciated.
>
> That sounds like you are having a problem. But I don't see any mention
> of what that is exactly.
>
> Amos


Re: [squid-users] cannot set pid_filename in an include

2017-11-16 Thread Amos Jeffries

On 16/11/17 20:26, Vieri wrote:

Hi,

Correct me if I'm wrong, but this may be a parsing bug:

# /etc/init.d/squid.test start
* /etc/squid/squid.test.conf must set pid_filename to
*/run/squid.test.pid


However, I have:

# grep include /etc/squid/squid.test.conf
include /etc/squid/squid.custom.test
include /etc/squid/squid.custom.rules.test


# grep pid_filename /etc/squid/squid.custom.test
pid_filename /run/squid.test.pid

Squid Object Cache: Version 3.5.27-20171101-re69e56c



Works for me:

/squid/sbin/squid-3.5 -k parse -f /squid/test_pidfinc.conf
2017/11/16 21:18:47| Startup: Initializing Authentication Schemes ...
2017/11/16 21:18:47| Startup: Initialized Authentication Scheme 'basic'
2017/11/16 21:18:47| Startup: Initialized Authentication Scheme 'digest'
2017/11/16 21:18:47| Startup: Initialized Authentication Scheme 'negotiate'
2017/11/16 21:18:47| Startup: Initialized Authentication Scheme 'ntlm'
2017/11/16 21:18:47| Startup: Initialized Authentication.
2017/11/16 21:18:47| Processing Configuration File: 
/squid/test_pidfinc.conf (depth 0)

2017/11/16 21:18:47| Processing: include /squid/foo_pidf_nc.conf
2017/11/16 21:18:47| Processing Configuration File: 
/squid/foo_pidf_nc.conf (depth 1)

2017/11/16 21:18:47| Processing: pid_filename /squid/foo_custom.id


Note how the complaint is coming from your init script, not Squid.

Amos


Re: [squid-users] block user agent

2017-11-16 Thread Vieri


From: Amos Jeffries 
>
>> The following works:
>> 
>> acl denied_useragent browser Chrome
>> acl denied_useragent browser MSIE
>> acl denied_useragent browser Opera
>> acl denied_useragent browser Trident
>> [...]
>> http_access deny denied_useragent
>> http_reply_access deny denied_useragent
>> deny_info 
>> http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent
>>  denied_useragent
>> 
>> The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped 
>> setup:
>> 
>> acl allowed_useragent browser Firefox/
>> [...]
>> http_access deny !allowed_useragent
>> deny_info 
>> http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent
>>  allowed_useragent
>>
> The User-Agent along with all HTTP layer details in HTTPS are hidden 
> behind the encryption layer. To do anything with them you must decrypt 
> the traffic first. If you can decrypt, it turns into regular HTTP traffic 
> - the normal access controls should then work as-is.


So why does my first example actually work even for https sites?

acl denied_useragent browser Chrome
acl denied_useragent browser MSIE
acl denied_useragent browser Opera
acl denied_useragent browser Trident
[...]
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info 
http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent
 denied_useragent

If the above "works" then another way would be to use a negated regular 
expression such as:
acl denied_useragent browser (?!Firefox)
but I don't think it's allowed.

Vieri


Re: [squid-users] cannot set pid_filename in an include

2017-11-16 Thread Vieri

From: Amos Jeffries 
>
> Note how the complaint is coming from your init script, not Squid.


{Thanks,Sorry} again.

Vieri


Re: [squid-users] block user agent

2017-11-16 Thread Vieri
Let me rephrase my previous question "So why does my first example actually 
work even for https sites?" to "So why does my first example actually work even 
for https sites in an ssl-bumped setup (the same as in example 2)?"


Re: [squid-users] block user agent

2017-11-16 Thread Amos Jeffries

On 16/11/17 21:29, Vieri wrote:



From: Amos Jeffries 



The following works:

acl denied_useragent browser Chrome
acl denied_useragent browser MSIE
acl denied_useragent browser Opera
acl denied_useragent browser Trident
[...]
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info 
http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent
 denied_useragent

The following works for HTTP sites, but not for HTTPS sites in an ssl-bumped 
setup:

acl allowed_useragent browser Firefox/
[...]
http_access deny !allowed_useragent
deny_info 
http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=allowed_useragent
 allowed_useragent


The User-Agent along with all HTTP layer details in HTTPS are hidden
behind the encryption layer. To do anything with them you must decrypt
the traffic first. If you can decrypt, it turns into regular HTTP traffic
- the normal access controls should then work as-is.



So why does my first example actually work even for https sites?


If you are decrypting the traffic, then, as I said, it works exactly the 
same as for HTTP messages.


If you are not decrypting the traffic, but receiving forward-proxy 
traffic, then you are probably blocking the CONNECT messages that set up 
tunnels for HTTPS - a CONNECT has a User-Agent header *if* it was 
generated by a UA instead of an intermediary like Squid.




acl denied_useragent browser Chrome
acl denied_useragent browser MSIE
acl denied_useragent browser Opera
acl denied_useragent browser Trident
[...]
http_access deny denied_useragent
http_reply_access deny denied_useragent
deny_info 
http://proxy-server1/proxy-error/?a=%a&B=%B&e=%e&E=%E&H=%H&i=%i&M=%M&o=%o&R=%R&T=%T&U=%U&u=%u&w=%w&x=%x&acl=denied_useragent
 denied_useragent

If the above "works" then another way would be to use a negated regular 
expression such as:
acl denied_useragent browser (?!Firefox)
but I don't think it's allowed.


AFAIK that feature is part of a different regex grammar than the one 
Squid uses.


PS. you do know the UA strings of modern browsers all reference each 
other right?  "Chrome like-Gecko like Firefox" etc.


Amos


Re: [squid-users] SSL Bump for regex URL comparison

2017-11-16 Thread Matus UHLAR - fantomas

On 16.11.17 08:21, Joe Foster wrote:

The problem is the connections are not getting through. It just acts like
there is no WiFi connection.


what exactly is the error? Does squid receive those connections?
Does squid reject them?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
We are but packets in the Internet of life (userfriendly.org)


[squid-users] Need help

2017-11-16 Thread Vayalpadu, Vedavyas
Hi All,

I am getting this error in /var/log/messages.

Nov 16 10:17:20 dkbavlpxpxy01 squid[91497]:
Failed to select source for 
'https://dkbavwpato02.global.internal.carlsberggroup.com/SES/services/masterdata/administratorServices-1.0.wsdl'

And customer is not able to connect to the application.

External App <-> Proxy <-> Internal application

Can anyone help?

Best regards,

Vyas  (vedavyas vayalpadu )
IBM-AIX-UNIX Support
vedavyas.vayalp...@accenture.com

Accenture BDC-10B
Bagmane World Technology Center, 6,
Service Road, Chinappa Layout, Mahadevapura,
Bengaluru, Karnataka 560048 - INDIA




This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise confidential information. If you have received it in 
error, please notify the sender immediately and delete the original. Any other 
use of the e-mail by you is prohibited. Where allowed by local law, electronic 
communications with Accenture and its affiliates, including e-mail and instant 
messaging (including content), may be scanned by our systems for the purposes 
of information security and assessment of internal compliance with Accenture 
policy.
__

www.accenture.com


Re: [squid-users] Need help

2017-11-16 Thread Matus UHLAR - fantomas

On 16.11.17 09:42, Vayalpadu, Vedavyas wrote:

Nov 16 10:17:20 dkbavlpxpxy01 squid[91497]:
Failed to select source for 
'https://dkbavwpato02.global.internal.carlsberggroup.com/SES/services/masterdata/administratorServices-1.0.wsdl'

And customer is not able to connect to the application.

External App <-> Proxy <-> Internal application


have you played with always_direct and never_direct?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I'm not interested in your website anymore.
If you need cookies, bake them yourself.


Re: [squid-users] block user agent

2017-11-16 Thread Vieri


From: Amos Jeffries 
>
> If you are decrypting the traffic, then it works as I said exactly the 
> same as for HTTP messages.
>
> If you are not decrypting the traffic, but receiving forward-proxy 
> traffic then you are probably blocking the CONNECT messages that setup 
> tunnels for HTTPS - it has a User-Agent header *if* it was generated by 
> a UA instead of an intermediary like Squid.


So I would need to allow CONNECT messages.
Something like:
http_access allow CONNECT allowed_useragent

Anyway, I'm not sure what "decrypting the traffic" implies. If I want an 
ssl-bumped setup to fully handle all HTTPS connections, and be able to detect 
the user-agent on https connections, how should I configure Squid? Should I 
allow all CONNECT messages?
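
One hedged way to wire this up in a Squid-3.5 ssl-bump setup (the
peek/bump steps are an assumption about the configuration in use, and
the ACL names are reused from the earlier examples):

```
# Sketch: let the tunnel be established, decrypt it, then filter by UA.
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

acl allowed_useragent browser Firefox/
http_access allow CONNECT SSL_ports     # raw CONNECT tunnels pass; no UA check
http_access deny !allowed_useragent     # applies to the decrypted requests
```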

> AFAIK that feature is part of a different regex grammar than the one 
> Squid uses.


I think I read something about Squid being built with a user-defined regex 
grammar/lib. Anyway, I take it it's not feasible for now.

> PS. you do know the UA strings of modern browsers all reference each 
> other right?  "Chrome like-Gecko like Firefox" etc.


Yes, but... We require IE for some Intranet apps, and Firefox for other 
Extranet apps.
We can set a custom user agent string for the Firefox browser. We also have 
other http user agents with customized UA strings. So we're 99% sure that all 
browser clients going through Squid will be tagged correctly. That's the reason 
why I would prefer to "deny all user agents" except one ("my custom UA 
string"). Most users will not try to tamper with this.
I do not want to "allow all except a list of substrings" because it would be a 
nightmare.

Vieri


Re: [squid-users] [External] Re: Need help

2017-11-16 Thread Vayalpadu, Vedavyas
Hello uhlar,

No, I am a bit new to the squid proxy server. We have taken a TCP dump
from the system and we see the following:

1. From the external application to the proxy server the traffic is flowing,
but from the proxy server to the internal application server traffic is not
flowing.
2. But from the proxy server to the internal application, traceroute and
telnet do work.


Regards
Vyas


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Matus UHLAR - fantomas
Sent: Thursday, November 16, 2017 4:09 PM
To: squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Need help

On 16.11.17 09:42, Vayalpadu, Vedavyas wrote:
>Nov 16 10:17:20 dkbavlpxpxy01 squid[91497]:
>Failed to select source for 
>'https://dkbavwpato02.global.internal.carlsberggroup.com/SES/services/masterdata/administratorServices-1.0.wsdl'
>
>And customer is not able to connect to the application.
>
>External App <-> Proxy <-> Internal application

have you played with always_direct and never_direct?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I'm not interested in your website anymore.
If you need cookies, bake them yourself.





Re: [squid-users] Is it possible to apply squid delay pools on users/groups from AD ?

2017-11-16 Thread Bike dernikov1
Thanks for the info. We searched for a solution but found that it is not
possible to combine delay pools, and this forum is our last hope; so far
we have solved almost everything :)
We have: Squid Object Cache: Version 3.5.23, so it should work.
Can you give us an example of how to use it? A colleague searched for an
example but couldn't find one.
Thanks for the help.

On Thu, Nov 16, 2017 at 9:02 AM, Amos Jeffries  wrote:
> On 16/11/17 01:43, Bike dernikov1 wrote:
>>
>> Hi,
>> this is my second topic, i wouldn't wan to mix with first. I hope that is
>> ok.
>> i hope that someone succeeded  to apply delay pools on users/groups from
>> AD.
>> We are now using  delay pool  on whole 10.0.0.0/8, but that is a
>> problem as different users have different requirements.   We have 30
>> locations, and we can set different rules by ip, but than we would
>> need one rule for one location, we would need to use static ip,
>> network reconfiguration, but that solution would be nightmare for
>> administration, and we would like to avoid static ip-s for users.
>
>
> It depends on your Squid version.
>
> The latest Squid with annotation support are capable of receiving user/group
> names from the auth and external ACL helpers. These get attached to the
> transaction and can be matched with the 'note' type ACL in any later
> 'fast-category' access controls like delay_pools.
>
> If your Squid is too old to use note ACL, or your helper(s) not providing
> the relevant details to Squid (in Squid-3.4+ helper syntax). Then no, sorry.
>
> Amos


Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-16 Thread Bike dernikov1
On Thu, Nov 16, 2017 at 8:58 AM, Amos Jeffries  wrote:
> On 16/11/17 01:32, Bike dernikov1 wrote:
>>
>>
>> If i can ask under same title:
>> Yesterday we had error in logs: syslog, cache.log, dmesg,access.log
>>
>> segfault at 8 ip ... sp . error 4 is squid
>> process pid exited due to signal 11 with status 0
>>
>> Squid restarted,  that was at the end of work, and i didn't  notice
>> change while surfing.
>> I noticed change in used memory, after i went trough logs, and found
>> segfault.
>>
>> Can you point me, how to analyze what happened.
>> Can that be problem with kernel ?
>>
>
> How to retrieve info about these type of things is detailed at
> .

I wasn't sure it was a bug, so I didn't want to post it as one.
As you now confirm that it can be a bug, I will prepare to retrieve
the info.
I just hope the bug won't happen under high load in the middle of a working day.


> NP: If you do not have core files enabled, then the data from that segfault
> is probably gone irretrievably. You may need to use the script to capture
> segfault details from a running proxy (the 'minimal downtime' section).

I am sure that I didn't enable it.

>
> Amos
>

Thanks for help.


Re: [squid-users] Proxy does not send response for internal host

2017-11-16 Thread tappdint
I was able to get the proxy to work properly with the original settings I
posted. The issue was with the docker network. There were multiple networks
and the squid container ran on a separate network rather than the network
where all the containers were operating. To fix the issue I simply ran squid
with an extra flag (--network) and everything seems to be working fine now.
Thanks!





Re: [squid-users] block user agent

2017-11-16 Thread Alex Rousskov
On 11/16/2017 01:44 AM, Vieri wrote:
> Let me rephrase my previous question "So why does my first example
> actually work even for https sites?" to "So why does my first example
> actually work even for https sites in an ssl-bumped setup (the same
> as in example 2)?"

AFAICT, there is not enough information to answer that or the original
question. Going forward, I recommend two steps:

1. Your "works" and "does not work" setups currently differ in at least
three variables: user agent name, slash after the user agent name, and
acl negation in http_access. Find out which single variable is
responsible for the breakage by eliminating all other differences.

2. Post two ALL,2 cache.logs, each containing a single transaction, one
for the "works" case and one for the "does not work" case polished as
discussed in #1.


HTH,

Alex.


Re: [squid-users] deny_info

2017-11-16 Thread Alex Rousskov
On 11/16/2017 12:52 AM, Vieri wrote:
> From: Amos Jeffries 
>> Because there are actually no custom deny_info attached to that 
>> "denied_restricted1_mimetypes_rep" ACL.


> Right. I don't know how I missed that. Sorry.


FWIW, I recommend avoiding "denied", "allowed", and similar prefixes in
ACL names because these prefixes clash with directive actions. ACLs
(names should) characterize transactions, not actions that Squid should
apply to those transactions. Polishing your names may simplify your
configuration, which may help avoid misconfiguration and/or confusion like

http_access allow denied_foo

Alex.


[squid-users] Slow speedtest results

2017-11-16 Thread Evan Pierce

Hi all

Any idea why using www.speedtest.net through my squid proxy (squid 
3.5.27 on CentOS 6.9) gives consistently false/bad speeds while doing a 
speed test? The actual speed when downloading a file from an actual web 
server, like say the Microsoft website, is consistently good (30Mb/s 
fiber, download speed 3.4MB/s), but a speed test done at the same time 
sits at around 3 to 4Mb/s. I have tried turning caching off and various 
other "tuning" settings on squid, but nothing has fundamentally altered 
the speed. Running a command-line speedtest from the squid host gives a 
correct result. The test machine was running Firefox and Chrome with the 
proxy statically configured and wasn't under any load. A similarly 
configured squid on smaller hardware with the same service provider 
consistently gives an accurate speedtest (same CentOS and squid 
versions). Anyone have any ideas?


thanks

Evan



[squid-users] Deny ports to users

2017-11-16 Thread Jonathan thomas Cho
Hello, I was curious how to restrict users from accessing ports.

I have 4 workers and need each to have its own port and not be able to use
the other 3.

I currently use:

http_port 3128 name=ip2
http_port 3129 name=ip3
http_port 3130 name=ip4

acl ip2 myip x.x.x.2
acl ip3 myip x.x.x.3
acl ip4 myip x.x.x.4
tcp_outgoing_address x.x.x.2 ip2
tcp_outgoing_address x.x.x.3 ip3
tcp_outgoing_address x.x.x.4 ip4

However, 3129 still works on all 4 ports.


Re: [squid-users] Deny ports to users

2017-11-16 Thread Yuri
You have chosen an inappropriate tool for your task.

Squid is a proxy, not a firewall.


17.11.2017 1:40, Jonathan thomas Cho wrote:
> Hello, I was curious how to restrict users from accessing ports . 
>
> I have 4 workers and need them to have their own ports and not able to
> use the other 3.  
>
> I currently use :
>
> http_port 3128 name=ip2
> http_port 3129 name=ip3
> http_port 3130 name=ip4
>
> acl ip2 myip x.x.x.2
> acl ip3 myip x.x.x.3
> acl ip4 myip x.x.x.4
> tcp_outgoing_address x.x.x.2 ip2
> tcp_outgoing_address x.x.x.3 ip3
> tcp_outgoing_address x.x.x.4 ip4
>
> However 3129 still work on all 4 ports.
>
>

-- 
**
* C++: Bug to the future *
**
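
The point above stands for real per-user enforcement, but purely inside
Squid a 'myportname' ACL can at least tie each named listening port to a
source network; a hedged sketch (the source network is an example, the
port name is taken from the original config):

```
# Sketch: pair the port named "ip2" with who may use it.
acl on_port2 myportname ip2
acl net2 src 192.0.2.0/24          # example clients allowed on port "ip2"
http_access deny on_port2 !net2    # everyone else is refused on this port
```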





Re: [squid-users] Slow speedtest results

2017-11-16 Thread Antony Stone
On Thursday 16 November 2017 at 19:18:15, Evan Pierce wrote:

> Hi all
> 
> Any idea why when using www.speedtest.net on my squid proxy ( squid
> 3.5.27 on Centos 6.9) gives consistently false/bad speeds while doing a
> speed test.

> A similarly configured squid on smaller hardware and the same service
> provider runs consistently gives an accurate speedtest (same centos and
> squid versions).

Please explain in more detail what the difference is between these two:

 - what hardware?

 - what does "similarly configured" mean - what are the differences?

 - what are the differences in reported results?

Oh, and just to be sure - are these both on the same connection, or on two 
different connections to the same provider?


Antony.

-- 
I just got a new mobile phone, and I called it Titanic.  It's already syncing.

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] Slow speedtest results

2017-11-16 Thread Alex Rousskov
On 11/16/2017 12:18 PM, Evan Pierce wrote:

> Any idea why when using www.speedtest.net on my squid proxy ( squid
> 3.5.27 on Centos 6.9) gives consistently false/bad speeds while doing a
> speed test. The actual speed when downloading a file from a actual web
> server like say the microsoft website is consistently good (30Mb/s fiber
> - download speed 3.4MB/s) but a speed test done at the same time sits at
> around 3 to 4Mb/s. I have tried turning caching off and various other
> "tuning" settings on squid but nothing has fundamentally altered the
> speed. Running command line speedtest gives a correct speedtest from the
> squid host. Test machine was machine running firefox and chrome with the
> proxy statically configured and wasn't under any load. A similarly
> configured squid on smaller hardware and the same service provider runs
> consistently gives an accurate speedtest (same centos and squid
> versions). Any one have any ideas?

I trust you have checked cache.log, system log, and network interface
statistics for warnings, errors, and red flags unique to the non-working
use case.

Make sure that browser-proxy path is about the same in all tests you
compare. The problem might be related to browser-Squid communication.

Since you have a "working" case (on "smaller hardware"), I would try the
following using identical Squid versions:

1. Use the default Squid configuration with Squid memory caching
disabled on both boxes. Is one setup still a lot "slower" than the other?

2. Compare access.logs and mgr:info output of the two tests (one test
performed after a clean Squid start). Any unexpected differences?

3. If you have not already, test a Squid configuration identical to that
"working" case (you can rename directories/hostnames if really needed,
of course, but do not change anything you do not have to change). Is one
setup still a lot "slower" than the other?

4. Comparing cache.logs of virtually identically configured Squids with
debug_options set to ALL,3 or higher may expose the critical difference.
Debugging will slow Squid down a lot, of course, but perhaps you will
see that one of the Squids is doing something that the other one does
not do.
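Step 4 can be set up with a temporary override in squid.conf on both boxes; the log path here is illustrative:

```
# temporary debug settings - remove after the comparison test
debug_options ALL,3
cache_log /var/log/squid/cache-debug.log
```

Afterwards, strip timestamps and PIDs from both cache-debug.log files before diffing them, otherwise every line differs.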


HTH,

Alex.


Re: [squid-users] Slow speedtest results

2017-11-16 Thread Evan Pierce

On 2017/11/16 10:55 PM, Alex Rousskov wrote:

On 11/16/2017 12:18 PM, Evan Pierce wrote:


Any idea why using www.speedtest.net through my squid proxy (squid
3.5.27 on CentOS 6.9) gives consistently false/bad speeds? The actual
speed when downloading a file from a real web server, say the
Microsoft website, is consistently good (30Mb/s fiber - download speed
3.4MB/s), but a speed test done at the same time sits at around 3 to
4Mb/s. I have tried turning caching off and various other "tuning"
settings on squid but nothing has fundamentally altered the speed.
Running a command-line speedtest from the squid host gives a correct
result. The test machine was running Firefox and Chrome with the proxy
statically configured and wasn't under any load. A similarly
configured squid on smaller hardware and the same service provider
consistently gives an accurate speedtest (same CentOS and squid
versions). Anyone have any ideas?

I trust you have checked cache.log, system log, and network interface
statistics for warnings, errors, and red flags unique to the non-working
use case.

Yes ... no obvious "red flags"

Make sure that browser-proxy path is about the same in all tests you
compare. The problem might be related to browser-Squid communication.
In both cases the test browser machines are physically cabled into the 
same gigabit switch as the squid proxy/firewall machine.



Since you have a "working" case (on "smaller hardware"), I would try the
following using identical Squid versions:

1. Use the default Squid configuration with Squid memory caching
disabled on both boxes. Is one setup still a lot "slower" than the other?


Yes, however the "bigger" site has more vlans so it has slightly more 
https access lines and acls.


2. Compare access.logs and mgr:info output of the two tests (one test
performed after a clean Squid start). Any unexpected differences?


Nothing jumps out at me.


3. If you have not already, test a Squid configuration identical to that
"working" case (you can rename directories/hostnames if really needed,
of course, but do not change anything you do not have to change). Is one
setup still a lot "slower" than the other?

Yes one is slower.

4. Comparing cache.logs of virtually identically configured Squids with
debug_options set to ALL,3 or higher may expose the critical difference.
Debugging will slow Squid down a lot, of course, but perhaps you will
see that one of the Squids is doing something that the other one does
not do.

I can't see anything, but both are in production and being used while I 
was testing, so they generated a lot of data.



HTH,

Alex.





Re: [squid-users] Slow speedtest results

2017-11-16 Thread Alex Rousskov
On 11/16/2017 02:53 PM, Evan Pierce wrote:
> I can't see anything but both are in production and being used while I
> was testing so generated a lot of data

Sorry, I did not realize you are using live Squids for these tests!
Combining real and test traffic makes triage a lot harder and pretty
much all of the tests I mentioned are nearly pointless on live Squids,
especially if those Squids handle substantially different traffic
streams. If there is no way to take these Squids offline for a test then

a) Consider starting a separate Squid instance (on each box) using the
otherwise default config with no memory cache and a dedicated http_port.

b) Consider fixing your overall architecture so that it becomes possible
to take any single Squid instance offline when needed without serious
effects on users.
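Option (a) might look like this minimal throwaway config (the port and file paths are illustrative):

```
# /etc/squid/squid-test.conf - default-ish instance with no caching
http_port 3199
cache_mem 0 MB
cache deny all
access_log /var/log/squid/access-test.log
cache_log /var/log/squid/cache-test.log
pid_filename /var/run/squid-test.pid
acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
http_access allow localnet
http_access deny all
```

Start it alongside the live instance with `squid -f /etc/squid/squid-test.conf` and point the test browser at port 3199.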

Alex.


[squid-users] CONNECT + custom data

2017-11-16 Thread Richard Peeters
Hi All,

I have a requirement to forward proxy an opaque stream of data. One of
the servers (acting as a client -A- to SQUID ) will use the CONNECT
method to connect to SQUID (on server B) and squid will then proxy
this data for A.

My question is I want to pass metadata from A to B which B will strip
out before proxying the data outbound, and I cannot find a way to do
that.

If this was an HTTP stream, headers could have been added by A and B
could have stripped them, but with my case I dont think even content
adaptation will help.

Can someone please advise on what feature of SQUID I should be looking
at to achieve this, or whether it is possible at all.

I have been reading documentation for less than 24 hours, please
pardon my ignorance.

Thanks,
Rich


Re: [squid-users] Is it possible to apply squid delay pools on users/groups from AD ?

2017-11-16 Thread Amos Jeffries

On 17/11/17 03:40, Bike dernikov1 wrote:

Thanks for the info. We searched for a solution but found that it is not
possible to combine delay pools, and the forum is our last hope; so far we
have solved almost everything :)
We have: Squid Object Cache: Version 3.5.23, so it could work.
Can you give us an example of how to use it? A colleague searched for
an example but couldn't find one.
Thanks for the help.



An example for username would be:

 auth_param ...
 acl login proxy_auth REQUIRED
 http_access deny !login

 delay_pools 1
 delay_class 1 ...
 delay_parameters 1 ...

 acl slow note user Fred Bob
 delay_access 1 allow slow
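A filled-in variant of the sketch above. The helper line and the numbers are illustrative assumptions: a class-1 (aggregate) pool that caps the matched users at 64000 bytes/s sustained with a 256000-byte burst bucket:

```
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl login proxy_auth REQUIRED
http_access deny !login

delay_pools 1
delay_class 1 1
# restore/max in bytes: ~64 KB/s sustained, 256 KB burst bucket
delay_parameters 1 64000/256000

acl slow note user Fred Bob
delay_access 1 allow slow
```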


For groups, the latest Kerberos auth helpers from Marcus Moeller are 
sending the SID and group details back to Squid for this. The other 
helpers bundled by Squid are not yet sending group names back.


(I was hoping to have that ready for Squid-4, but have not had the time. 
Patches or github PR welcome if anyone wants to contribute).


Amos


Re: [squid-users] [External] Re: Need help

2017-11-16 Thread Amos Jeffries

On 17/11/17 02:08, Vayalpadu, Vedavyas wrote:

Hello uhlar,

No, I am a bit new to squid proxy server. We have taken a TCP dump from the 
system and we see that:

1. From the external application to the proxy server the traffic is flowing, but from 
the proxy server to the internal application server traffic is not flowing.
2. But from the proxy server to the internal application, traceroute and telnet 
are working.



That log entry means that Squid was not able to locate any server IP 
addresses that will work for that transaction.


Either DNS has no results, client original dst-IP is not available, or 
the servers Squid does find are a) forbidden for use or b) currently 
DOWN according to the dynamic availability checks (ICMP echo, ICP query 
for cache_peer, and past TCP connection attempts).


The cache.log lines immediately following the one you quoted tell the 
results from DNS and each of those access controls so you can see what 
the reason for the failure was.


Amos


Re: [squid-users] SQUID memory error after vm.swappines changed from 60 to 10

2017-11-16 Thread Amos Jeffries

On 17/11/17 03:49, Bike dernikov1 wrote:

On Thu, Nov 16, 2017 at 8:58 AM, Amos Jeffries wrote:

On 16/11/17 01:32, Bike dernikov1 wrote:



If I can ask under the same title:
Yesterday we had error in logs: syslog, cache.log, dmesg,access.log

segfault at 8 ip ... sp . error 4 is squid
process pid exited due to signal 11 with status 0

Squid restarted; that was at the end of the work day, and I didn't notice
any change while surfing.
I noticed a change in used memory; after I went through the logs, I found
the segfault.

Can you point me to how to analyze what happened?
Can that be a problem with the kernel?



How to retrieve info about these type of things is detailed at
.


I wasn't sure it was a bug, so I didn't want to post it as a bug.
As you now confirm that it can be a bug, I will prepare for retrieving
the info.
I just hope the bug won't happen at high load in the middle of a working day.



The how-tos are just on that page because, if you are reporting that kind 
of bug, those details are mandatory. You don't have to be reporting a bug 
to use the techniques.


That said, a segfault is almost always a bug. Though it could be a bug in 
the system environment or hardware rather than Squid. The details you 
get from looking at the traces should indicate which is actually the case.






NP: If you do not have core files enabled, then the data from that segfault
is probably gone irretrievably. You may need to use the script to capture
segfault details from a running proxy (the 'minimal downtime' section).


I am sure that I didn't enable it.



Okay, then you will need to enable them for further diagnosis.
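Enabling core dumps usually involves both the shell limit and a squid.conf directive (a sketch; paths and methods vary by distro and init system):

```
# in the shell (or init script) that starts Squid:
ulimit -c unlimited

# in squid.conf - a writable directory where core files are dropped:
coredump_dir /var/spool/squid
```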

Amos


Re: [squid-users] Proxy does not send response for internal host

2017-11-16 Thread Amos Jeffries

On 17/11/17 03:57, tappdint wrote:

I was able to get the proxy to work properly with the original settings I
posted. The issue was with the docker network. There were multiple networks
and the squid container ran on a separate network rather than the network
where all the containers were operating. To fix the issue I simply ran squid
with an extra flag (--network) and everything seems to be working fine now.
Thanks!



Cool. Sounds like you have a very interesting use-case there.

Would you be able to write up the design and configuration settings for 
a page in our wiki?


eg. 


Amos


Re: [squid-users] Deny ports to users

2017-11-16 Thread Amos Jeffries


On 17/11/17 08:42, Yuri wrote:

You chose the wrong tool for your task.

Squid is a proxy, not a firewall.



Indeed.




17.11.2017 1:40, Jonathan thomas Cho wrote:

Hello, I was curious how to restrict users from accessing ports.

I have 4 workers and need each of them to have its own port and not be 
able to use the other 3.


I currently use :

http_port 3128 name=ip2
http_port 3129 name=ip3
http_port 3130 name=ip4


The above are directives for the *listening* ports receiving 
client<->Squid connections.


You have here configured this Squid *process* (all workers of it) to use 
port 3128 on all IP addresses the machine has been assigned. Same for 
port 3129 and 3130.


Squid cannot control which port a client decides to connect to. It can 
only listen (or not).


I assume you mean you want each worker to use different listening ports. 
That can be done by using the ${process_number} config macro in the port 
number itself, eg. http_port 313${process_number}.
 However, be aware that this will lead to issues: the coordinator 
process will not be able to manage SMP port functionality, and worker 
automatic restart after crashes will also have issues, since the process 
number changes there too. You thus cannot reliably use the port 
name/number for other things like you seem to be wanting.
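With the caveats above in mind, a per-worker listening-port sketch would be:

```
workers 4
# worker 1 listens on 3131, worker 2 on 3132, etc.
http_port 313${process_number}
```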




>> acl ip2 myip x.x.x.2
acl ip3 myip x.x.x.3
acl ip4 myip x.x.x.4


"myip" is deprecated, it does not work at all well. Use "myportname" 
instead.
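A sketch of the quoted config rewritten with myportname (the names are illustrative; each acl matches the name= of the port the client connected to):

```
http_port 3128 name=ip2
http_port 3129 name=ip3
http_port 3130 name=ip4

acl ip2 myportname ip2
acl ip3 myportname ip3
acl ip4 myportname ip4

tcp_outgoing_address x.x.x.2 ip2
tcp_outgoing_address x.x.x.3 ip3
tcp_outgoing_address x.x.x.4 ip4
```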


Your Squid should complain about this when you run '-k parse' to check 
your config validity. If your Squid does not support that new ACL type 
you definitely need to upgrade.




tcp_outgoing_address x.x.x.2 ip2
tcp_outgoing_address x.x.x.3 ip3
tcp_outgoing_address x.x.x.4 ip4



These are for Squid<->server connections. Has nothing to do with 
client<->Squid connections.


The OS selects which ports are used here. Not Squid.



However 3129 still works on all 4 ports.



3129 is a port number. Singular. It does not *listen* on other values.

The traffic arriving on connections *to* there is independent of the 
outgoing connection port numbers - which are not controllable as 
mentioned above. So it is not clear what you are trying to say by that.



Amos


Re: [squid-users] CONNECT + custom data

2017-11-16 Thread Amos Jeffries

On 17/11/17 15:09, Richard Peeters wrote:

Hi All,

I have a requirement to forward proxy an opaque stream of data. One of
the servers (acting as a client -A- to SQUID ) will use the CONNECT
method to connect to SQUID (on server B) and squid will then proxy
this data for A.

My question is I want to pass metadata from A to B which B will strip
out before proxying the data outbound, and I cannot find a way to do
that.


"metadata" in HTTP just means headers.

For custom hop-by-hop headers your client application needs to use the 
Connection: header to control their removal by the receiving next-hop 
HTTP agent. See .
 The custom header field-values can be accessed using the various 
request/reply header regex ACL types, same as any header.
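For example, the client could send a hypothetical custom header and declare it hop-by-hop so the receiving proxy strips it before forwarding (the X-Meta header name and its value are made up for illustration):

```
CONNECT example.com:443 HTTP/1.1
Host: example.com:443
X-Meta: tenant=acme; job=1234
Connection: X-Meta
```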


Squid does not touch any of the 'payload' section following a CONNECT 
message. It always gets relayed as-is or rejected completely.
 Except when SSL-Bump is configured to decrypt tunnelled TLS traffic. 
Custom payload formats are not possible there, only TLS syntax.


Amos


Re: [squid-users] squid 3.5.27 . https website

2017-11-16 Thread Amos Jeffries

On 17/11/17 15:32, G~D~Lunatic wrote:

i use squid 3.5.27 as a transparent proxy.


Small correction: You have configured NAT interception proxy with 
SSL-Bump'ing. Not truly transparent.
 There are some vital differences. Most specific to your case is that 
interception proxies do alter the traffic in significant ways (not 
transparently relay as-is).



With the proxy, I access some https websites like www.hupu.com, but the 
webpage does not show correctly. There are some similar websites, such 
as https://www.zhihu.com and https://www.jd.com/. So I want to know 
where the problem is or how to deal with it.


The webpage error reads like: "s1.hdslb.com used an invalid security 
certificate. This certificate is valid for the following domain names 
only: *.zhaopin.com, *.zhaopin.cn, *.dpfile.com, *.cdn.myqcloud.com, 
*.sogoucdn. SSL error code: SSL_ERROR_BAD_CERT_DOMAIN"


how can i send a screenshot to explain?
Here is my configure
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
http_access allow all


*Extremely* unsafe configuration. This proxy is now an "open proxy". 
Anybody can abuse it for any use whatsoever.


Combined with how you have (below) disabled the recording of all TLS 
traffic problems (and thus hacking attempts) and do server-first bumping 
of clients, what you end up with is a remarkably dangerous piece of 
software whose most useful property is being a way to attack your 
network. :-(






# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
acl NCACHE method GET
no_cache deny NCACHE


"no_cache" is an deprecated directive. It was removed because it 
confused people. Delete the "no_" prefix.



Also, most other methods are not cacheable. So why not do it the simple way?

 cache deny all
or
 store_miss deny all




# And finally deny all other access to this proxy
request_header_access Via deny all #hide squid header
request_header_access X-Forwarded-For deny all #hide squid header
#request_timeout 2 minutes #client request timeout



The above is a very slow and nasty way to perform:

 via off
 forwarded_for delete


Though if you want to be transparent, use these instead:
 via off
 forwarded_for transparent



# Squid normally listens to port 3128
http_port 3120

http_port 3128 intercept

https_port 192.168.51.115:3129 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
cert=/usr/local/squid/ssl_cert/myCA.pem 
key=/usr/local/squid/ssl_cert/myCA.pem

always_direct allow all


The use of "always_direct allow all" is a now useless workaround for a 
long ago fixed bug. No version of Squid available in any distro today 
needs it.




ssl_bump server-first all
acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step1
ssl_bump splice all


You are mixing up rules from multiple different versions of the SSL-Bump 
feature.


"server-first" is equivalent to:

 ssl_bump peek ssl_step1
 ssl_bump bump all

It overrides all the ssl_bump lines following it.




sslproxy_version 0
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER


Remove all three of the above lines. You may then be able to see what is 
going on if the errors are in the TLS layer.
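Putting those corrections together, a minimal modern peek-then-decide sketch for this config would be (splice everything; change the last line to `ssl_bump bump all` only if you really need to decrypt):

```
acl ssl_step1 at_step SslBump1
ssl_bump peek ssl_step1
ssl_bump splice all
```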


All thes

[squid-users] forward proxy to reverse proxy to app

2017-11-16 Thread Bernhard Dübi
Hi,

I am trying to configure squid for a very special use case but can't get
it to work. If you could give me some hints on how to do it right,
that would be great.

Here's what I try to achieve:

the browser has proxy:8080 configured as manual proxy
from the browser I access some websites
when the request is plain http, the reply must be a redirect to https
when the request is https, the ssl connection must be terminated
on the proxy and the request must be forwarded as http to the
application server

I know I could just forget about ssl and go directly to the app server
with http, but the customer insists on that particular setup.

we use several domains like app1.doma.com, app2.domb.biz, app3.domc.org
in order to return the correct certificate for each request, I need a
dedicated ip:port combination for each certificate

I came up with the following setup

browser -> proxy:8080 -> squid
  for http://app1.doma.com  -> 127.0.0.1:10081 -> haproxy -> redirect
  for https://app1.doma.com -> 127.0.0.1:10401 -> haproxy -> terminate ssl -> app1.local.net:8123
  for http://app2.doma.com  -> 127.0.0.1:10082 -> haproxy -> redirect
  for https://app2.doma.com -> 127.0.0.1:10402 -> haproxy -> terminate ssl -> app2.local.net:8765
  for http://app3.doma.com  -> 127.0.0.1:10083 -> haproxy -> redirect
  for https://app3.doma.com -> 127.0.0.1:10403 -> haproxy -> terminate ssl -> app3.local.net:

here's the configuration I created so far

http_port 8080

# User networks
acl Users src 10.11.12.0/22

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow Users
http_access deny all
coredump_dir /var/spool/squid
cache deny all
never_direct allow all

acl to_domA dstdomain .doma.com
acl to_domB dstdomain .domb.biz
acl to_domC dstdomain .domc.org

cache_peer 127.0.0.1 parent 10081 0 name=domA_redirect no-query originserver
cache_peer_access domA_redirect allow !CONNECT to_domA
cache_peer 127.0.0.1 parent 10401 0 name=domA_ssl no-query originserver
cache_peer_access domA_ssl allow CONNECT to_domA

cache_peer 127.0.0.1 parent 10082 0 name=domB_redirect no-query originserver
cache_peer_access domB_redirect allow !CONNECT to_domB
cache_peer 127.0.0.1 parent 10402 0 name=domB_ssl no-query originserver
cache_peer_access domB_ssl allow CONNECT to_domB

cache_peer 127.0.0.1 parent 10083 0 name=domC_redirect no-query originserver
cache_peer_access domC_redirect allow !CONNECT to_domC
cache_peer 127.0.0.1 parent 10403 0 name=domC_ssl no-query originserver
cache_peer_access domC_ssl allow CONNECT to_domC



The plain http part works: squid selects the correct peer and haproxy
responds with the redirect.

SSL, or more precisely the CONNECT call, is the problem:

2017/11/17 07:56:21.429 kid1| 28,3| Checklist.cc(63) markFinished:
0x55d69a951b68 answer ALLOWED for match
2017/11/17 07:56:21.429 kid1| 28,3| Checklist.cc(163) checkCallback:
ACLChecklist::checkCallback: 0x55d69a951b68 answer=ALLOWED
2017/11/17 07:56:21.429 kid1| 44,3| peer_select.cc(171)
peerCheckNeverDirectDone: peerCheckNeverDirectDone: ALLOWED
2017/11/17 07:56:21.429 kid1| 44,3| peer_select.cc(177)
peerCheckNeverDirectDone: direct = DIRECT_NO (never_direct allow)
2017/11/17 07:56:21.429 kid1| 44,3| peer_select.cc(441) peerSelectFoo:
CONNECT app1.doma.com
2017/11/17 07:56:21.429 kid1| 44,3| peer_select.cc(685)
peerGetSomeParent: CONNECT app1.doma.com
2017/11/17 07:56:21.429 kid1| 44,2| peer_select.cc(280)
peerSelectDnsPaths: Failed to select source for 'app1.doma.com:443'
2017/11/17 07:56:21.429 kid1| 44,2| peer_select.cc(281)
peerSelectDnsPaths:   always_direct = DENIED
2017/11/17 07:56:21.429 kid1| 44,2| peer_select.cc(282)
peerSelectDnsPaths:never_direct = ALLOWED
2017/11/17 07:56:21.429 kid1| 44,2| peer_select.cc(295)
peerSelectDnsPaths:timedout = 0
2017/11/17 07:56:21.429 kid1| 26,3| tunnel.cc(1156)
tunnelPeerSelectComplete: No paths found. Aborting CONNECT
2017/11/17 07:56:21.429 kid1| 4,3| errorpage.cc(633) errorSend:
local=10.1.2.3:8080 remote=10.11.12.13:61110 FD 12 flags=1,
err=0x55d69a511528
2017/11/17 07:56:21.429 kid1| 4,2| errorpage.cc(1262) BuildContent: No
existing error page language negotiated for ERR_CANNOT_FORWARD. Using
default error file.


If it makes any difference, here are some details about the OS and squid:

root@proj-proxy:~# dpkg -l | grep squid
ii  squid 3.5.12-1ubuntu7.4

Re: [squid-users] [External] Re: Need help

2017-11-16 Thread Vayalpadu, Vedavyas
Hello All,

Thanks for your help; we resolved the issue once we replaced the old IP with 
the new IP under "cache_peer" in the squid.conf file.


Regards
Vyas


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Friday, November 17, 2017 8:19 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] [External] Re: Need help

On 17/11/17 02:08, Vayalpadu, Vedavyas wrote:
> Hello uhlar,
>
> No, I am a bit new to squid proxy server. We have taken a TCP dump from the 
> system and we see that:
>
> 1. From the external application to the proxy server the traffic is flowing, but from 
> the proxy server to the internal application server traffic is not flowing.
> 2. But from the proxy server to the internal application, traceroute and telnet 
> are working.
>

That log entry means that Squid was not able to locate any server IP addresses 
that will work for that transaction.

Either DNS has no results, client original dst-IP is not available, or the 
servers Squid does find are a) forbidden for use or b) currently DOWN according 
to the dynamic availability checks (ICMP echo, ICP query for cache_peer, and 
past TCP connection attempts).

The cache.log lines immediately following the one you quoted tell the results 
from DNS and each of those access controls so you can see what the reason for 
the failure was.

Amos





Re: [squid-users] [External] Re: Need help

2017-11-16 Thread Amos Jeffries

On 17/11/17 20:49, Vayalpadu, Vedavyas wrote:

Hello All,

Thanks for your help, we have resolved the issue once replaced the Old IP with the New IP 
under "cache_peer" in squid.conf file.



You know that you can place a hostname there, right? No need to manually 
configure the IP address.


Amos