Re: [squid-users] Problems with peek and slice through parent proxy

2018-07-11 Thread Hess, Niklas
Hello again,

Thanks for any help.

It's a forward proxy only, and my users plan to connect to a cloud service on the 
internet.
The parent proxy that I have to deal with is not administered by me or any 
of my colleagues (it's the ISP's proxy).
I can't make any changes to the parent.

The plan is that the proxy decrypts the SSL traffic from the webserver, scans it 
with ClamAV, and then sends it on to the client, re-encrypted with the 
certificate from our internal CA.
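For the ClamAV leg of such a plan, the usual pairing is c-icap with the squidclamav service; the following is only a sketch, assuming the stock c-icap port (1344) and service name, which are not confirmed by the poster:

```
icap_enable on
# RESPMOD before caching: responses are scanned before being stored/served
icap_service clamscan respmod_precache icap://127.0.0.1:1344/squidclamav bypass=off
adaptation_access clamscan allow all
```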


Client - - - SSL (int) - - - Squid - - - SSL (ext) - - - Parent proxy - - - SSL (ext) - - - Webserver

Int = internal certificate
Ext = external (original) certificate

And I know my config is a security mess, but I will correct that.
Even with your "minimal" config I get "assertion failed: 
PeerConnector.cc:116: 'peer->use_ssl'" in the cache.log.
And I would probably have to add the "ssl" option to the parent in my config, but 
that won't work either, because the parent wouldn't understand it.

Is there a way of doing that, without touching the parent proxy?

Thanks again for any help!
Niklas


Re: [squid-users] Delay pools in squid4 not working with https

2018-07-11 Thread Julian Perconti
>> 
>> On Tuesday, 10 July 2018 at 18:57:43 -03, Alex Rousskov wrote:
>> 
>> 
>> On 07/10/2018 01:50 PM, Paolo Marzari wrote:
>>> My home server just updated from 3.5.27, everything is working fine, but
>>> delay pools seems broken to me.
>> 
>>> Revert to 3.5.27 and delays works again with every type of traffic.
>>> 
>>> I think there's something wrong with https traffic.
>> 
>> You are probably right. A few days ago, while working on an unrelated
>> project, we have found a bug in delay pools support for tunneled https
>> traffic. That support was probably broken by v4 commit 6b2b6cf. We have
>> not tested v3.5, so I can only confirm that v4 and v5 are broken.
>> 
>> The bug will be fixed as a side effect of "peering support for SslBump"
>> changes that should be ready for the official review soon. If you would
>> like to test our unofficial branch, the code is available at
>> https://github.com/measurement-factory/squid/tree/SQUID-360-peering-for-SslBump
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users

I can confirm that delay_pools works fine with both HTTP and HTTPS protocols in 
Squid 4 running on Debian 9.

Squid Cache: Version 4.1 
Service Name: squid 
 
Here the cfg: 
 
delay_pools 1 
delay_class 1 2 

delay_access 1 allow all 
 
delay_parameters 1 -1/-1 10/104857600 # ~100KBs/~100MB 
delay_initial_bucket_level 50
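For readers unfamiliar with class-2 pools, the fields decode roughly as annotated below. Note this is a sketch, not the poster's exact config: his literal restore value of 10 means 10 bytes/s, while 102400 would match his "~100KBs" comment.

```
delay_pools 1                     # one pool defined
delay_class 1 2                   # pool 1 is class 2: aggregate + per-host buckets
delay_access 1 allow all          # route all traffic through pool 1
# delay_parameters <pool> <aggregate restore/max> <individual restore/max>
#   restore = refill rate in bytes/second, max = bucket size in bytes, -1 = unlimited
delay_parameters 1 -1/-1 102400/104857600   # no aggregate cap; ~100 KB/s, 100 MB burst per host
delay_initial_bucket_level 50     # buckets start 50% full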

Regards


Re: [squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails

2018-07-11 Thread Alex Rousskov
On 07/11/2018 04:39 AM, pete dawgg wrote:

> cache_dir aufs /mnt/cache/squid 75000 16 256

> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): 
> (2) No such file or directory

If you are using a combination of an SMP-unaware disk cache (AUFS) with
SMP features such as multiple workers or a shared memory cache, please
note that this combination is not supported.

The FATAL message above is about a shared memory segment used for
collapsed forwarding. IIRC, Squid v3 attempted to create those segments
even if they were not needed, so I cannot tell for sure whether you are
using an unsupported combination of SMP/non-SMP features.

I can tell you that you cannot use a combination of collapsed
forwarding, AUFS cache_dir, and multiple workers. Also, non-SMP
collapsed forwarding was primarily tested with UFS cache_dirs.
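To make the supported combinations concrete, a sketch (directive values are illustrative only, not a recommendation for this poster's sizing):

```
# NOT supported together: SMP workers + SMP-unaware aufs + collapsed forwarding
#   workers 4
#   collapsed_forwarding on
#   cache_dir aufs /var/cache/squid 75000 16 256

# A supported SMP-style layout uses the SMP-aware rock store instead:
workers 4
cache_dir rock /var/cache/squid-rock 65536 max-size=32768
```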


Unfortunately, I cannot answer your question regarding overflowing AUFS
cache directories. One possibility is that Squid is not cleaning up old
cache files fast enough. You already set cache_swap_low/cache_swap_high
aggressively. Does Squid actively remove objects from the full disk
cache when you start it up _without_ any traffic? If not, it could be a
Squid bug. Unfortunately, nobody has worked on AUFS code for years
(AFAIK) so it may be difficult to fix anything that might be broken there.


Cheers,

Alex.


Re: [squid-users] store_id_extras to access request header

2018-07-11 Thread Kedar K
On Wed, Jul 11, 2018 at 8:37 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 07/10/2018 11:59 PM, Kedar K wrote:
>
> > I tried to get the request header to store id helper
> > with %>h option for store_id_extras; However, I get a '-'
>
> > store_id_extras "%>h %>a/%>A %un %>rm myip=%la myport=%lp"
>
> > Is this expected behaviour?
>
> No, it is not expected. Consider filing a bug report with Squid bugzilla
> and, if possible, attach an ALL,9 cache.log while reproducing the
> problem with a single wget or curl transaction. Please do not forget to
> specify your Squid version.
>
> If you can reproduce the problem with Squid v4 or v5, please mention
> that as well.

Thank you Alex; yes, it seems a bug; tested with both store_id & url_rewrite
extras. Either of them sends blank headers. BTW, I am using version 3.5.20.
I will test with ALL,9 and report the bug.

>
>

> > Wouldn't request header be available before
> > sending a query to store-id helper?
>
> Yes, request headers are available at Store ID calculation time.
>
>
> > Is it possible to use combination of store_id_program helper and
> > rewrite_url_program; such that the extra params from the url are used by
> > store-id helper to create a store-id and then the url_rewrite program
> > can strip them off before sending the request to origin server? ​
>
> That plan would not work because the Store ID helper is consulted after
> the URL rewriter:
> https://wiki.squid-cache.org/SquidFaq/OrderIsImportant#Callout_Sequence
>
This makes it clear now.

>
> Using custom headers is a much simpler/cleaner solution IMO.
>
Agree.


>
> HTH,
>
> Alex.
>


-- 

- Kedar


Re: [squid-users] Problems with peek and slice through parent proxy

2018-07-11 Thread Alex Rousskov
On 07/11/2018 09:03 AM, Amos Jeffries wrote:
> IIRC, Measurement Factory have an ongoing project adding ability for
> Squid to generate CONNECT messages which will make cache_peer links work
> better. But even so the second proxy will still have to do its own
> SSL-Bump on the crypted traffic

Correct on all counts. Our unofficial code is available for testing at
https://github.com/measurement-factory/squid/tree/SQUID-360-peering-for-SslBump

Alex.


Re: [squid-users] store_id_extras to access request header

2018-07-11 Thread Alex Rousskov
On 07/10/2018 11:59 PM, Kedar K wrote:

> I tried to get the request header to store id helper
> with %>h option for store_id_extras; However, I get a '-'

> store_id_extras "%>h %>a/%>A %un %>rm myip=%la myport=%lp"

> Is this expected behaviour?

No, it is not expected. Consider filing a bug report with Squid bugzilla
and, if possible, attach an ALL,9 cache.log while reproducing the
problem with a single wget or curl transaction. Please do not forget to
specify your Squid version.

If you can reproduce the problem with Squid v4 or v5, please mention
that as well.


> Wouldn't request header be available before
> sending a query to store-id helper?

Yes, request headers are available at Store ID calculation time.


> Is it possible to use combination of store_id_program helper and
> rewrite_url_program; such that the extra params from the url are used by
> store-id helper to create a store-id and then the url_rewrite program
> can strip them off before sending the request to origin server? ​

That plan would not work because the Store ID helper is consulted after
the URL rewriter:
https://wiki.squid-cache.org/SquidFaq/OrderIsImportant#Callout_Sequence


Using custom headers is a much simpler/cleaner solution IMO.
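To illustrate the custom-key idea, here is a minimal store_id_program helper sketch in Python. The stripped parameter names and the URL policy are invented for illustration; only the line protocol is Squid's: URL plus any configured extras in, "OK store-id=..." or "ERR" out.

```python
#!/usr/bin/env python3
"""Minimal Store-ID helper sketch for squid's store_id_program.

Protocol (Squid 3.5): each request arrives as one line,
    [channel-ID] URL [extras...]
and the reply is one line,
    [channel-ID] OK store-id=KEY   or   [channel-ID] ERR
The channel-ID field only appears when helper concurrency is enabled.
The parameter-stripping policy below is hypothetical.
"""
import sys
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical: query parameters assumed not to change the object's content.
IGNORED_PARAMS = {"sessionid", "token"}

def store_id_for(url: str) -> str:
    """Return a canonical cache key for url, or "" when no rewrite is needed."""
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query)
    kept = [(k, v) for k, v in pairs if k.lower() not in IGNORED_PARAMS]
    if len(kept) == len(pairs):
        return ""  # nothing stripped -> tell Squid to use the URL as-is
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

def main() -> None:
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        # With store_id_children concurrency, the first token is a channel-ID.
        channel = fields.pop(0) + " " if fields[0].isdigit() else ""
        key = store_id_for(fields[0]) if fields else ""
        sys.stdout.write(channel + (f"OK store-id={key}" if key else "ERR") + "\n")
        sys.stdout.flush()  # replies must not sit in a stdio buffer

if __name__ == "__main__" and not sys.stdin.isatty():
    main()  # run the helper loop only when stdin is a pipe (as under Squid)
```

It would be wired in with something like `store_id_program /usr/local/bin/store-id.py` (path hypothetical).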


HTH,

Alex.


Re: [squid-users] Problems with peek and slice through parent proxy

2018-07-11 Thread Amos Jeffries
On 12/07/18 01:43, Kedar K wrote:
> 
> 
> On Wed, Jul 11, 2018 at 7:03 PM Hess, Niklas wrote:
> 
> Hello list,
> 
> 
> I´m setting up a Squid proxy specifically to scan the incoming
> traffic from a cloud platform.
> 

Please clarify what "incoming" means to you. Is the cloud platform
generating request messages or supplying the responses?

The HTTP definition of in/out is oriented with request flow, i.e. from the
origin server's point of view, *opposite* to how an ISP would describe it.



> ClamAV should scan the incoming traffic.
> 
> 
> So far so good.
> 

May appear to be, but your displayed config is absolutely *not* "all
good". Details below.


> 
> The cloud uses WebDAV over HTTPS, so I have to SSL-Bump the incoming
> traffic via Peek and Splice Feature.
> 

Then do so. The config you show below is not peek+splice.

It is "bump all" which is the old client-first behaviour.
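For contrast, the usual 3.5 peek-based rules look like this sketch:

```
acl step1 at_step SslBump1
ssl_bump peek step1     # step 1: read the TLS client hello (SNI) first
ssl_bump bump all       # later steps: bump using the peeked server details
# ...or "ssl_bump splice all" to pass the traffic through undecrypted
```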


> That works indeed with the CA signed internal Certificate.
> 

Ah, if by CA you mean one of the globally trusted CAs, then you probably
should not be using SSL-Bump at all; that is a feature for forward/explicit
proxies that need to access the traffic.

Otherwise, if your CA is a self-signed/generated one, then of course it
"works": all SSL-Bump variants use that type of CA certificate.


> 
> But as soon as I add a cache_peer as a “parent proxy” it does not
> work. (This request could not be forwarded to the origin server or
> to any parent caches.)
> 
> I just get “FwdState.cc(813) connectStart: fwdConnectStart: Ssl
> bumped connections through parent proxy are not allowed” in the
> cache.log
> 
> 
> And yes I know ssl-bump through a parent proxy is a security issue
> and might be insecure, but the connection to the parent is internal,
> safe and secure.
> 

Don't count on that. You configured an open proxy. Anyone who can open a
TCP connection to it pretty much has wide-open (and anonymized) access
to all your LAN internal services.

The decision to do that, even for testing, implies a potential for holes
which does not bode well.



> I don’t know how, but could there be a way to “comment out” the
> section in fwdConnectStart source file?
> 

You cannot comment out a *lack* of something. That is the problem here.
There is no TLS on this peer's connection, so no server-cert exists for
Squid to copy/forge in what it is sending the client. More on that below.


> 
> Squid Cache: Version 3.5.27
> 
> Service Name: squid
> 
> configure options:  '--with-openssl' '--enable-ssl-crtd'
> 
> Here's my "minimal" SSL-Bump config:
> 
> ### Start config
> 
> debug_options ALL,6
> 
> shutdown_lifetime 1 seconds
> 
> http_port 8080 ssl-bump
> cert=/usr/local/squid/etc/ssl_cert/Squidtest.pem
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> 
> sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db
> -M 4MB
> 
> sslcrtd_children 25 startup=5 idle=10
> 
> cache_peer 10.106.3.66 parent 8080 0 no-query no-digest name=parent
> 

The connection to this peer is not secured. TLS (thus HTTPS) traffic is
required to remain secure on outbound connections.
Note that this does *not* mean it has to remain HTTPS, but it cannot
be plain-text HTTP like the above peer connection. Currently the only
security protocol cache_peer supports is TLS/SSL, so the "ssl" (v2.6 -
v3.5) or "tls" (v4.0+) option is required.
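Concretely, a secured peer line would look something like this sketch (the CA file path is a placeholder; and note this only helps when the parent actually speaks TLS on that port, which the poster's ISP proxy does not):

```
# Squid 2.6-3.5 spells the flag "ssl"; Squid 4+ spells it "tls"
cache_peer 10.106.3.66 parent 8080 0 no-query no-digest name=parent ssl sslcafile=/etc/squid/parent-ca.pem
```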

Also, the outgoing server cert is what Squid bases its forged
serverHello certificate on, so there are major problems added when the
cache_peer certificate is different from the origin server's one.

The ideal way to relay SSL-Bumped traffic between proxies is currently
to let Squid re-encrypt it, and then repeat the NAT intercept to direct
Squid's outbound connections into the second proxy.

IIRC, Measurement Factory has an ongoing project adding the ability for
Squid to generate CONNECT messages, which will make cache_peer links work
better. But even so, the second proxy will still have to do its own
SSL-Bump on the encrypted traffic, because we *have* to get the origin
server cert parameters through the whole chain to the client for the TLS
to work properly.


> 
> never_direct allow all
> 
> 
> sslproxy_cert_error allow all
> 
> sslproxy_flags DONT_VERIFY_PEER
> 

Don't do this. Ever. It actively disables all the security which is the
whole point of TLS's existence, rendering invalid any tests you may run
to check that things are "working".

The server/peer could emit random garbage bytes in a syntax layout
resembling a TLS handshake and this Squid would blithely say "okay" and
send it the client's private data. While telling 

Re: [squid-users] Exchange OWA 2016 behind squid

2018-07-11 Thread Amos Jeffries
On 11/07/18 23:50, Mike Surcouf wrote:
> I am sure Amos won't mind me saying but nginx is the right tool for that 
> scenario.

I don't mind the saying, but I disagree. The HTTP behaviour bugs I keep
hearing about NGinX having tend to make other non-Squid proxies /
servers a better choice when Squid itself is not top of the list.

The only situation where I recommend NGinX is when the admin in question
already has a strong preference for using it, e.g. when it would be more
trouble to learn something different to solve the problem at hand.



That aside, the trouble with OWA is that it is email / SMTP software
which grew limited HTTP capabilities, and it is proprietary, so nobody in
our FOSS world actually knows what it intends to do with its messages
and connections.

Since HTTP and SMTP share message syntax but require very different
behaviour, decrypting the TLS is a bit risky and may break rather badly
if the wrong connection happens to terminate at an HTTP proxy. Bugs and
limitations in the OWA HTTP(S) code make for a rather tricky situation
unless you can see exactly what is going on down to the TCP/IP level
when troubleshooting.



> -Original Message-
> From: Pedro Guedes
> 
> Hi
> 
> I have been reading some material on this and
> trying to reverse proxy squid on a different ssl port
> like 2020 and then connect to port 443 on the exchange.
> 
> All the examples follow the configs on the 443 port, same
> on squid and exchange.
> 
> Looks like it is not possible to put squid listening on a different
> port than 443 and then connect to port 443 on
> exchange.
> 
> Is this true?

No. Squid can easily do that. Just set up the http(s)_port [OWA
client->Squid] and cache_peer [Squid->Exchange/OWA server] directives
however you want. Whether it "works" in the context of what OWA is doing
is the questionable part, and not related to Squid.
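As a sketch of what that setup could look like in 3.5 syntax (hostnames, IPs, and cert paths are placeholders, and whether OWA tolerates the port change is the separate question discussed here):

```
# Public side: Squid reverse proxy listening on 2020
https_port 2020 accel cert=/etc/squid/owa-public.pem defaultsite=owa.example.com
# Internal side: relay to Exchange/OWA over TLS on 443
cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl name=owaServer
acl owaSite dstdomain owa.example.com
cache_peer_access owaServer allow owaSite
http_access allow owaSite
```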

The problem is what the OWA server can do, what the client software can
do - and what they tell each other in their messages. All of which has
to cope perfectly with the custom port you told Squid to use. Otherwise
you just see "broken".

 * Absolutely avoid URL-rewrite. This will only break things. Use proper
HTTP redirect if you really have to, and avoid changing anything at the
proxy if you can.

 * Avoid TPROXY and NAT intercept of the traffic. It can be coped with,
but adds MANY problems that are best to avoid here.

 * Be careful of the TLS settings on the proxy. OWA has some odd and
quite Microsoft-specific things that it requires, and prefers.


As you found, OWA itself does not permit port changes (easily?). I'm not
sure if it has improved in recent years with the "365" software
conversions; it used to be not possible at all.

HTH
Amos


Re: [squid-users] Exchange OWA 2016 behind squid

2018-07-11 Thread Pedro Guedes
Hi thanks for your answer.

But it is too vague...
I first tried putting OWA Exchange on port 2020
(I could do it in Exchange 2003).
No way. Besides, Exchange 2016 only works over SSL.

So the idea is putting Apache or Squid listening on port
2020 to the public and then redirecting to 443 on Exchange
inside the intranet. Because of certificates and a lot
of URL redirection it becomes quite impossible.

The most difficult part is Exchange URL redirection giving
port 443 to clients instead of 2020.

Probably maintaining the same port would work, but that
would defeat my plan to not advertise 443 to the
public.

Anyway, all the FAQ solutions use 443 on both Squid and
Exchange.

The Apache way seems cleaner, as it has directives to
change URLs in transit, but Exchange is so complex
and undocumented that it is probably, again, impossible.

Thanks anyway







On Wed, July 11, 2018 12:50, Mike Surcouf wrote:
> I am sure Amos won't mind me saying but nginx is the right tool for that 
> scenario.
> Squid is a great forward proxy and I use it for our network, but for 
> incoming connections
> nginx is more flexible and designed for the job.
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Pedro Guedes
>  Sent: 11 July 2018 12:41
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Exchange OWA 2016 behind squid
>
>
> Hi
>
>
> I have been reading some material on this and
> trying to reverse proxying squid on a diferent ssl port like 2020 an then 
> connect to port 443
> on the exchange.
>
> Al the examples follow the configs on the 443 port, same
> on squid and exchange.
>
> Looks like is no possible to putsquid  listening on a diferent
> port than 443 and then connecting to port 443 on exchange.
>
> Is this true?
> By the architecture it is not possible to make exchange owa
> work on a diferent port than 443.
>
>
>
>
>
>
>




Re: [squid-users] Problems with peek and slice through parent proxy

2018-07-11 Thread Kedar K
On Wed, Jul 11, 2018 at 7:03 PM Hess, Niklas 
wrote:

> Hello list,
>
>
>
> I´m setting up a Squid proxy specifically to scan the incoming traffic
> from a cloud platform.
>
> ClamAV should scan the incoming traffic.
>
>
>
> So far so good.
>
>
>
> The cloud uses WebDAV over HTTPS, so I have to SSL-Bump the incoming
> traffic via Peek and Splice Feature.
>
> That works indeed with the CA signed internal Certificate.
>
>
>
> But as soon as I add a cache_peer as a “parent proxy” it does not work.
> (This request could not be forwarded to the origin server or to any parent
> caches.)
>
> I just get “FwdState.cc(813) connectStart: fwdConnectStart: Ssl bumped
> connections through parent proxy are not allowed” in the cache.log
>
>
>
> And yes I know ssl-bump through a parent proxy is a security issue and
> might be insecure, but the connection to the parent is internal, safe and
> secure.
>
> I don’t know how, but could there be a way to “comment out” the section in
> fwdConnectStart source file?
>
>
>
> Squid Cache: Version 3.5.27
>
> Service Name: squid
>
> configure options:  '--with-openssl' '--enable-ssl-crtd'
>
>
>
>
>
> Here´s my “minimal” SSL-Bump config:
>
>
>
> ### Start config
>
>
>
> debug_options ALL,6
>
> shutdown_lifetime 1 seconds
>
>
>
> http_port 8080 ssl-bump cert=/usr/local/squid/etc/ssl_cert/Squidtest.pem
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>
>
>
> sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB
>
> sslcrtd_children 25 startup=5 idle=10
>
>
>
> cache_peer 10.106.3.66 parent 8080 0 no-query no-digest name=parent
>
>
>
> never_direct allow all
>
>
>
> sslproxy_cert_error allow all
>
> sslproxy_flags DONT_VERIFY_PEER
>
>
>
> ssl_bump bump all
>
Did you forget to copy the at_step ACLs?

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3


>
>
> http_access allow all
>
>
>
>
>
> ### End config
>
>
>
> Thanks for any help!
>
> Niklas
>
> 
>


-- 

- Kedar


[squid-users] Problems with peek and slice through parent proxy

2018-07-11 Thread Hess, Niklas
Hello list,

I'm setting up a Squid proxy specifically to scan the incoming traffic from a 
cloud platform.
ClamAV should scan the incoming traffic.

So far so good.

The cloud uses WebDAV over HTTPS, so I have to SSL-Bump the incoming traffic 
via Peek and Splice Feature.
That works indeed with the CA signed internal Certificate.

But as soon as I add a cache_peer as a "parent proxy" it does not work. (This 
request could not be forwarded to the origin server or to any parent caches.)
I just get "FwdState.cc(813) connectStart: fwdConnectStart: Ssl bumped 
connections through parent proxy are not allowed" in the cache.log

And yes I know ssl-bump through a parent proxy is a security issue and might 
be insecure, but the connection to the parent is internal, safe and secure.
I don't know how, but could there be a way to "comment out" the section in 
fwdConnectStart source file?

Squid Cache: Version 3.5.27
Service Name: squid
configure options:  '--with-openssl' '--enable-ssl-crtd'


Here's my "minimal" SSL-Bump config:

### Start config

debug_options ALL,6
shutdown_lifetime 1 seconds

http_port 8080 ssl-bump cert=/usr/local/squid/etc/ssl_cert/Squidtest.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 25 startup=5 idle=10

cache_peer 10.106.3.66 parent 8080 0 no-query no-digest name=parent

never_direct allow all

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

ssl_bump bump all

http_access allow all


### End config

Thanks for any help!
Niklas


Azubi Niklas Hess
Team Applikation-Management

Eigenbetrieb Informationstechnologie des Wetteraukreises
61169 Friedberg
Europaplatz
Gebäude B
Tel.: 06031 83-6526
Mobil:
Fax.: 06031 83-916526
www.wetteraukreis.de

Information on data protection is available via our privacy page 
www.datenschutz.wetterau.de
This e-mail contains confidential and/or legally protected information. If you 
are not the intended recipient, please inform the sender immediately and 
destroy this e-mail. Unauthorized copying and unauthorized forwarding of this 
e-mail are not permitted.



[squid-users] squid 4.1 works great ;)

2018-07-11 Thread Dieter Bloms
Hi,

I have run squid 4.1 for several days in production and have to say it works
pretty well.
It is stable, and it downloads missing intermediate certificates
automatically.

Great work!

Thank you very much for this version.


-- 
Regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails

2018-07-11 Thread Amos Jeffries
On 11/07/18 22:39, pete dawgg wrote:
> Hello list,
> 
> i run squid 3.5.27 with some special settings for windows updates as 
> suggested here: 
> https://wiki.squid-cache.org/ConfigExamples/Caching/WindowsUpdates It's been 
> running almost trouble-free for some time, but for ~2 months the 
> cache-partition has been filling up to 100% (space; inodes were OK) and squid 
> then failed.
> 

That implies that either your cache_dir size accounting is VERY badly
broken, something else is filling the disk (eg failing to rotate
swap.state journals), or disk purging is not able to keep up with the
traffic flow.


> the cache-dir is on a 100GB ext2-partition and configured like this:
> 

Hmm, a partition. What else is using the same physical disk?
Squid puts such a random I/O pattern on cache disks that it is best not
to use the physical drive for other things in parallel: they can slow
Squid down, and conversely Squid can cause problems for other uses by
flooding the disk controller queues.


> cache_dir aufs /mnt/cache/squid 75000 16 256

These numbers do matter for ext2 more than for other FS types. You need
them to be large enough not to allocate too many inodes per directory. I
would use "64 256" here, or even "128 256" for a bigger safety margin.

(I *think* modern ext2 implementations have resolved the core issue, but
that may be wrong and ext2 is old enough to be wary.)
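With the poster's size kept, that suggestion would read something like:

```
# More L1 directories spread the inodes out on ext2 (sketch)
cache_dir aufs /mnt/cache/squid 75000 64 256
```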


> cache_swap_low 60
> cache_swap_high 75
> minimum_object_size 0 KB
> maximum_object_size 6000 MB

If you bumped this for the Win8 sizes mentioned in our wiki, the Win10
major updates have bumped sizes up again past 10GB. So you may need to
increase this.


> 
> some special settings for the windows updates:
> range_offset_limit 6000 MB

Add the ACLs necessary to restrict this to WU traffic. It is really hard
on cache space**, so it should not be allowed for just any traffic.


** What I mean by that is it may result in N parallel fetches of the
entire object unless the collapsed forwarding feature is used.
In regards to your situation: consider a 10GB WU object being fetched
10 times -> 10*10 GB of disk space required just to fetch. That
over-fills your available 45GB (60% of the 75000 MB cache_dir, per
cache_swap_low). And 11 fetches will overflow your whole disk.
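A sketch of such an ACL restriction (the domain list is abbreviated; the wiki page linked by the poster carries the full set):

```
acl wuCDN dstdomain .windowsupdate.com .update.microsoft.com
range_offset_limit 6000 MB wuCDN   # prefetch whole objects only for WU
range_offset_limit 0               # everyone else: fetch only the requested range
```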



> maximum_object_size 6000 MB
> quick_abort_min -1
> quick_abort_max -1
> quick_abort_pct -1
> 
> when i restart squid with its initscript it sometimes expunges some stuff 
> from the cache but then fails again after a short while:
> before restart:
> /dev/sdb2   99G   93G  863M  100% /mnt/cache
> after restart:
> /dev/sdb2   99G   87G  7,4G   93% /mnt/cache
> 

How much of that /mnt/cache size is in /mnt/cache/squid ?

Is it one physical HDD spindle (versus a RAID drive) ?


>
> there are two types of errors in cache.log:
> FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm):
> (2) No such file or directory

The cf__metadata.shm error is quite bad - it means your collapsed
forwarding is not working well. Which implies it is not preventing the
disk overflow on parallel huge WU fetches.

Are you able to try the new Squid-4? There are some collapsed forwarding
and cache management changes that may fix, or allow better diagnosis of,
these issues particularly and maybe your disk usage problem.


> FATAL: Failed to rename log file /mnt/cache/squid/swap.state.new to
> /mnt/cache/squid/swap.state

This is suspicious, how large are those swap files?

Does your proxy have correct access permissions on them and the
directories in their path? Both the Unix filesystem and SELinux /
AppArmor / whatever your system uses for advanced access control matter here.

Same things to check for the /dev/shm device and the *.shm file access error
above. But /dev/shm should be a root thing rather than Squid user access.


>
> What should i do to make squid work with windows updates reliably again?

Some other things you can check;

You can try making cache_swap_high/low closer together and much
larger (eg the default 90 and 95 values). Current 3.5 releases have fixed
the bug which made smaller values necessary on some earlier installs.
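A minimal sketch of that suggestion against the quoted config (the values shown are the Squid defaults mentioned above):

```
# squid.conf - raise the swap watermarks back toward the defaults,
# so trimming starts later and stops sooner
cache_swap_low  90
cache_swap_high 95
```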


If you can afford the delay it introduces to restart, you could run a
full scan of the cached data (stop Squid, delete the swap.state* files,
then restart Squid and wait).
 - you could do that with a copy of Squid not handling user traffic if
necessary, but the running one cannot use the cache while that is happening.
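That scan procedure could be sketched roughly as below (paths are taken from the quoted config; the service and log commands are placeholders to adapt to your distribution, and Squid must not be using this cache while the scan runs):

```shell
# Rebuild swap.state by forcing a full re-scan of the cache_dir contents.
systemctl stop squid                  # or: squid -k shutdown
rm -f /mnt/cache/squid/swap.state*    # no index -> full directory scan
systemctl start squid                 # rebuild happens during startup
tail -f /var/log/squid/cache.log      # watch the "Store rebuilding" progress
```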


Otherwise, have you tried purging the entire cache and starting Squid
with a clean slate?
 That would be a lot faster for recovery than the above scan, but does
spend a bit more bandwidth short-term while re-filling the cache.


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] store_id_extras to access request header

2018-07-11 Thread Kedar K
With the following config to access the request header:
store_id_extras "%>h %>a/%>A %un %>rm myip=%la myport=%lp"

the store_id_extras does not seem to forward the header info.

Is there something that I might be missing?

Again, I see the header in access.log; however, the same is missing from the
data sent to the store-id helper.
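For context, a store-id helper receives one request line at a time on stdin and replies on stdout. The sketch below assumes the line-based helper protocol without channel concurrency (each line is "<URL> <extras...>", and the reply is "OK store-id=<key>" or "ERR"); the query-string-stripping rule is purely illustrative, not a recommendation:

```python
#!/usr/bin/env python3
"""Minimal store-id helper sketch (illustrative key-derivation rule)."""
from urllib.parse import urlsplit, urlunsplit

def make_store_id(line: str) -> str:
    # First token is the URL; any store_id_extras values follow it
    # on the same line, space-separated.
    url = line.split()[0]
    parts = urlsplit(url)
    # Illustrative rule: key the object on scheme+host+path only, so the
    # same object fetched with different query params shares one cache key.
    key = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return f"OK store-id={key}" if key != url else "ERR"

# The helper main loop would be:
#   import sys
#   for line in sys.stdin:
#       if line.strip():
#           print(make_store_id(line.strip()), flush=True)  # unbuffered reply

print(make_store_id("http://example.com/big.cab?p=1 extras"))
# -> OK store-id=http://example.com/big.cab
```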


On Wed, Jul 11, 2018 at 3:29 PM Kedar K  wrote:

> That was a false alarm; it actually cached only the redirected url and the
> key generated by store-id helper was not used.
>
> On Wed, Jul 11, 2018 at 2:49 PM Kedar K  wrote:
>
>> It worked with a combination of store-id helper and url rewriter.
>>
>> - Kedar
>>
>> On Wed, Jul 11, 2018 at 11:42 AM Kedar K  wrote:
>>
>>> additional note:
>>> I do see both request and response header in access.log though.
>>>
>>> On Wed, Jul 11, 2018 at 11:29 AM Kedar K 
>>> wrote:
>>>
 Hi,
 I tried to get the request header to store id helper
 with %>h option for store_id_extras; However, I get a '-' (and the
 default k-v pairs intact)

 Is this expected behaviour? Wouldn't request header be available before
 sending a query to store-id helper?

 My use case was to pass custom fields either as part of URL (append at
 the end) or request header.

 Is it possible to use combination of store_id_program helper and
 rewrite_url_program; such that the extra params from the url are used by
 store-id helper to create a store-id and then the url_rewrite program can
 strip them off before sending the request to origin server?


 --

 *- Kedar*

>>>
>>>
>>> --
>>>
>>>
>>>
>>
>> --
>>
>>
>>


Re: [squid-users] Exchange OWA 2016 behind squid

2018-07-11 Thread Mike Surcouf
I am sure Amos won't mind me saying, but nginx is the right tool for that
scenario.
Squid is a great forward proxy and I use it for our network, but for incoming
connections nginx is more flexible and designed for the job.

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Pedro Guedes
Sent: 11 July 2018 12:41
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Exchange OWA 2016 behind squid

Hi

I have been reading some material on this and
trying to reverse-proxy with squid on a different SSL port
like 2020 and then connect to port 443 on the exchange.

All the examples use the same port 443 in the configs, both
on squid and exchange.

It looks like it is not possible to put squid listening on a
different port than 443 and then connect to port 443 on
exchange.

Is this true?
Is it, by the architecture, not possible to make Exchange OWA
work on a different port than 443?






[squid-users] Exchange OWA 2016 behind squid

2018-07-11 Thread Pedro Guedes
Hi

I have been reading some material on this and
trying to reverse-proxy with squid on a different SSL port
like 2020 and then connect to port 443 on the exchange.

All the examples use the same port 443 in the configs, both
on squid and exchange.

It looks like it is not possible to put squid listening on a
different port than 443 and then connect to port 443 on
exchange.

Is this true?
Is it, by the architecture, not possible to make Exchange OWA
work on a different port than 443?
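The setup being asked about could be sketched roughly like this (a hypothetical squid.conf fragment, not a tested OWA config; the certificate path, `mail.example.com`, and `exchange.internal` are placeholders):

```
# squid.conf - reverse proxy on a non-standard port, forwarding
# to Exchange OWA on 443 (hypothetical sketch)
https_port 2020 accel cert=/etc/squid/owa.pem defaultsite=mail.example.com

cache_peer exchange.internal parent 443 0 no-query originserver ssl \
    sslflags=DONT_VERIFY_PEER name=owa
cache_peer_access owa allow all
```

Note that even if Squid accepts this, OWA itself may generate absolute URLs that assume port 443, which is a separate issue from the proxy config.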






[squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails

2018-07-11 Thread pete dawgg
Hello list,

I run squid 3.5.27 with some special settings for windows updates as suggested
here: https://wiki.squid-cache.org/ConfigExamples/Caching/WindowsUpdates It's
been running almost trouble-free for some time, but for ~2 months the
cache partition has been filling up to 100% (space; inodes were OK) and squid
then failed.

The cache_dir is on a 100GB ext2 partition and configured like this:

cache_dir aufs /mnt/cache/squid 75000 16 256
cache_swap_low 60
cache_swap_high 75
minimum_object_size 0 KB
maximum_object_size 6000 MB

some special settings for the windows updates:
range_offset_limit 6000 MB
maximum_object_size 6000 MB
quick_abort_min -1
quick_abort_max -1
quick_abort_pct -1

When I restart squid with its initscript it sometimes expunges some stuff from
the cache but then fails again after a short while:
before restart:
/dev/sdb2   99G   93G  863M  100% /mnt/cache
after restart:
/dev/sdb2   99G   87G  7,4G   93% /mnt/cache

there are two types of errors in cache.log:
FATAL: Ipc::Mem::Segment::open failed to shm_open(/squid-cf__metadata.shm): (2) 
No such file or directory
FATAL: Failed to rename log file /mnt/cache/squid/swap.state.new to 
/mnt/cache/squid/swap.state

What should I do to make squid work with windows updates reliably again?



Re: [squid-users] store_id_extras to access request header

2018-07-11 Thread Kedar K
That was a false alarm; it actually cached only the redirected url and the
key generated by store-id helper was not used.

On Wed, Jul 11, 2018 at 2:49 PM Kedar K  wrote:

> It worked with a combination of store-id helper and url rewriter.
>
> - Kedar
>
> On Wed, Jul 11, 2018 at 11:42 AM Kedar K  wrote:
>
>> additional note:
>> I do see both request and response header in access.log though.
>>
>> On Wed, Jul 11, 2018 at 11:29 AM Kedar K  wrote:
>>
>>> Hi,
>>> I tried to get the request header to store id helper
>>> with %>h option for store_id_extras; However, I get a '-' (and the
>>> default k-v pairs intact)
>>>
>>> Is this expected behaviour? Wouldn't request header be available before
>>> sending a query to store-id helper?
>>>
>>> My use case was to pass custom fields either as part of URL (append at
>>> the end) or request header.
>>>
>>> Is it possible to use combination of store_id_program helper and
>>> rewrite_url_program; such that the extra params from the url are used by
>>> store-id helper to create a store-id and then the url_rewrite program can
>>> strip them off before sending the request to origin server?
>>>
>>>
>>> --
>>>
>>> *- Kedar*
>>>
>>
>>
>> --
>>
>> *- Kedar Kekan*
>>
>
>
> --
>
> *- Kedar Kekan*
>


-- 

*- Kedar Kekan*


Re: [squid-users] store_id_extras to access request header

2018-07-11 Thread Kedar K
It worked with a combination of store-id helper and url rewriter.

- Kedar

On Wed, Jul 11, 2018 at 11:42 AM Kedar K  wrote:

> additional note:
> I do see both request and response header in access.log though.
>
> On Wed, Jul 11, 2018 at 11:29 AM Kedar K  wrote:
>
>> Hi,
>> I tried to get the request header to store id helper
>> with %>h option for store_id_extras; However, I get a '-' (and the
>> default k-v pairs intact)
>>
>> Is this expected behaviour? Wouldn't request header be available before
>> sending a query to store-id helper?
>>
>> My use case was to pass custom fields either as part of URL (append at
>> the end) or request header.
>>
>> Is it possible to use combination of store_id_program helper and
>> rewrite_url_program; such that the extra params from the url are used by
>> store-id helper to create a store-id and then the url_rewrite program can
>> strip them off before sending the request to origin server?
>>
>>
>> --
>>
>> *- Kedar*
>>
>
>
> --
>
> *- Kedar Kekan*
>


-- 

*- Kedar Kekan*