Re: [squid-users] Errors in cache.log

2016-09-22 Thread Amos Jeffries
On 23/09/2016 6:11 a.m., erdosain9 wrote:
> Hi.
> I'm getting these messages in cache.log:
> 
>  Error negotiating SSL on FD 121: error::lib(0):func(0):reason(0)
> (5/0/0)
> 2016/09/22 14:20:36 kid1| BUG: Unexpected state while connecting to a
> cache_peer or origin server
> 2016/09/22 14:29:23 kid1| Error negotiating SSL connection on FD 33:
> error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
> 2016/09/22 14:29:24 kid1| Error negotiating SSL connection on FD 33:
> error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
> 2016/09/22 14:32:02 kid1| WARNING: HTTP: Invalid Response: No object data
> received for
> https://r3---sn-q4f7sn7l.googlevideo.com/videoplayback?pl=21&itag=242&dur=3973.160&source=youtube&keepalive=yes&expire=1474585097&mime=video%2Fwebm&signature=14F5607A717C8A9DD579A55C69F6CDF4C2308FC5.47CE0A2B9F54F2C8C4042FF7797A08708951C276&clen=12532131&gir=yes&key=cms1&ip=190.113.224.106&ipbits=0&id=o-ABdgV3GldzFU1c8jBp6s_27abh9g_9c3hA0dbUgDkgjM&upn=4Xlvne-1RrE&lmt=1449585312748326&sparams=clen,dur,ei,expire,gir,id,initcwndbps,ip,ipbits,itag,keepalive,lmt,mime,mm,mn,ms,mv,nh,pl,requiressl,source,upn&ei=qQ3kV7rtJNT0wQTR0I_oDg&requiressl=yes&cpn=jdZ-1Vthefy2q9Og&alr=yes&ratebypass=yes&c=WEB&cver=1.20160921&redirect_counter=1&req_id=ad722d06312b3cfc&cms_redirect=yes&mm=34&mn=sn-q4f7sn7l&ms=ltu&mt=1474563648&mv=m&nh=IgpwcjAyLmV6ZTAxKgkxMjcuMC4wLjE&range=8224151-8353222&rn=840&rbuf=585667
> AKA
> r3---sn-q4f7sn7l.googlevideo.com/videoplayback?pl=21&itag=242&dur=3973.160&source=youtube&keepalive=yes&expire=1474585097&mime=video%2Fwebm&signature=14F5607A717C8A9DD579A55C69F6CDF4C2308FC5.47CE0A2B9F54F2C8C4042FF7797A08708951C276&clen=12532131&gir=yes&key=cms1&ip=190.113.224.106&ipbits=0&id=o-ABdgV3GldzFU1c8jBp6s_27abh9g_9c3hA0dbUgDkgjM&upn=4Xlvne-1RrE&lmt=1449585312748326&sparams=clen,dur,ei,expire,gir,id,initcwndbps,ip,ipbits,itag,keepalive,lmt,mime,mm,mn,ms,mv,nh,pl,requiressl,source,upn&ei=qQ3kV7rtJNT0wQTR0I_oDg&requiressl=yes&cpn=jdZ-1Vthefy2q9Og&alr=yes&ratebypass=yes&c=WEB&cver=1.20160921&redirect_counter=1&req_id=ad722d06312b3cfc&cms_redirect=yes&mm=34&mn=sn-q4f7sn7l&ms=ltu&mt=1474563648&mv=m&nh=IgpwcjAyLmV6ZTAxKgkxMjcuMC4wLjE&range=8224151-8353222&rn=840&rbuf=585667
> Error negotiating SSL on FD 91: error::lib(0):func(0):reason(0)
> (5/-1/104)
> 

Firstly, that is not one message. It is four messages plus one partial message.

They may be related log entries, or maybe not. It is hard to say when
they are occurring across ~12 minutes. A single TLS handshake should be
much faster than that, so I suspect they are a mix of at least three
different transactions' worth of info.


> 
> 
> Sometimes the web browser gives something like "bad CA",
> 
> or this (IPv6??):
> 

The error below is not necessarily IPv6 related. It shows an IPv6
address because you configured "dns_v4_first on", so the _last_ thing to
be tried was that IPv6 address.

What it means is that *all* of the IPv4 and IPv6 ways to contact the
server failed.
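For reference, the directive in question behaves like this (a squid.conf sketch; it is presumably already in the poster's config):

```
# squid.conf - try IPv4 (A) results before IPv6 (AAAA) ones.
# IPv6 destinations are still attempted, just last - so the final
# "Network is unreachable" error names an IPv6 address when every
# earlier IPv4 attempt has already failed.
dns_v4_first on
```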


> The following error was encountered while trying to retrieve the URL:
> https://www.facebook.com/*
> 
> Connection to 2a03:2880:f105:83:face:b00c:0:25de failed.
> 
> The system returned: (101) Network is unreachable
> 
> The remote host or network may be down. Please try the request again.
> 
> Your cache administrator is webmaster.
> 
> This is my config
> 
> #
> # Recommended minimum configuration:
> #
> 



> 
> # IP GROUPS
> acl full src "/etc/squid/ips/full.lst"
> acl limitado src "/etc/squid/ips/limitado.lst"
> acl sistemas src "/etc/squid/ips/sistemas.lst"
> acl adminis  src "/etc/squid/ips/adminis.lst"



> 
> # Block advertising ( http://pgl.yoyo.org/adservers/ )
> acl ads dstdom_regex "/etc/squid/listas/ad_block.lst"
> http_access deny ads



> acl stream url_regex -i \.flv$
> acl stream url_regex -i \.mp4$
> acl stream url_regex -i watch?
> acl stream url_regex -i youtube
> acl stream url_regex -i facebook
> acl stream url_regex -i fbcdn\.net\/v\/(.*\.mp4)\?
> acl stream url_regex -i fbcdn\.net\/v\/(.*\.jpg)\? 
> acl stream url_regex -i akamaihd\.net\/v\/(.*\.mp4)\?
> acl stream url_regex -i akamaihd\.net\/v\/(.*\.jpg)\?
> 
> ## Denied domains
> acl dominios_denegados dstdomain "/etc/squid/listas/dominios_denegados.lst"
> 
> ## Blocked extensions
> acl multimedia urlpath_regex "/etc/squid/listas/multimedia.lst"
> 
> ## Dangerous extensions
> acl peligrosos urlpath_regex "/etc/squid/listas/peligrosos.lst"
> 
> #Bypass squid
> #acl bypass_dst_dom  dstdomain "/etc/squid/listas/bypass_dst_domain.lst"
> 
> ## Social networks
> acl redes_sociales url_regex -i "/etc/squid/listas/redes_sociales.lst"
> 
> 
> # Ports
> acl SSL_ports port 443
> acl SSL_ports port 8443
> acl SSL_ports port 8080
> 
> acl Safe_ports port 631   # httpCUPS
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl 

Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-22 Thread Jason Haar
On Tue, Sep 20, 2016 at 8:39 PM, FredB  wrote:

> I'm searching a way to use a secure SSO with Squid, how did you implement
> the authenticate method with an implicit proxy ?
> I'm reading many documentations about SAML, but I found nothing about Squid
>
> I guess we can only do something with cookies ?
>

Hi Fred

Proxies only support "HTTP authentication" methods: Basic, Digest, NTLM,
etc. So you either have to use one of those, or perhaps "fake" the
creation of one of those...?

E.g. you mentioned SAML, but gave no context beyond saying you didn't want
AD. So let's say SAML is a requirement. Well, that's directly impossible,
as it isn't an "HTTP authentication" method, but you could hit it from the
sides...

How about putting a SAML SP on your Squid server, and having it generate
fresh random Digest authentication creds for any authenticated user (i.e.
the same username, but a 30-char random password), then telling them to
cut-n-paste those into their web browser's proxy prompt and "save" them.
That way the proxy is using Digest, and it only involved a one-off SAML
interaction. I say Digest instead of Basic because Digest is more secure
over cleartext - but it's also noticeably slower than Basic over
high-latency links, so you can choose your poison there.
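A minimal sketch of the Digest side of that idea, assuming the stock digest_file_auth helper and illustrative paths (both vary by distro):

```
# squid.conf - Digest auth against a flat password file. The SAML SP
# (not shown) would append the generated credentials to this file
# after a successful SAML login.
auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/digest_passwd
auth_param digest realm proxy
auth_param digest children 5
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```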

If you're really keen, you can actually do proxy-over-TLS via WPAD with
Firefox/Chrome - at which point I'd definitely recommend Basic for the
performance reasons ;-)
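For completeness, proxy-over-TLS is selected on the browser side with the "HTTPS" keyword in a PAC/WPAD file (a sketch; the proxy hostname is a placeholder):

```javascript
// wpad.dat / proxy.pac - "HTTPS" (rather than "PROXY") tells
// Firefox/Chrome to open a TLS connection to the proxy itself;
// fall back to going direct if the proxy is unreachable.
function FindProxyForURL(url, host) {
  return "HTTPS proxy.example.net:443; DIRECT";
}
```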



-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Accelerator Mode - HSTS and Redirect

2016-09-22 Thread Amos Jeffries
On 23/09/2016 12:45 p.m., creditu wrote:
> We have been using squid in accelerator mode for a number of years. In
> the current setup we have the squid frontends that send all the http
> requests to the backend apache webservers using a simple redirect
> script.  We need to switch to https for the public presence.

A redirect/rewrite script is very rarely a suitable way to do this for a
reverse-proxy.

Use cache_peer to configure what backend servers exist and
cache_peer_access rules to determine which one(s) any given request can
be sent to.

The backends should be capable of accepting the traffic as if the proxy
were not there. If for some reason it has to have a different domain
name (actual need for this is rare), then the cache_peer forcedomain=
option can be used.
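A minimal sketch of that layout (all names, addresses and paths here are placeholders):

```
# squid.conf - TLS-terminating reverse proxy with one Apache backend.
https_port 443 accel cert=/etc/squid/site.pem key=/etc/squid/site.key defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=apache1
acl our_site dstdomain www.example.com
cache_peer_access apache1 allow our_site
cache_peer_access apache1 deny all
http_access allow our_site
http_access deny all
```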

> 
> So, our initial thought would be to use https_port for public HTTPS
> presence and send the requests using cache_peer to the backend apache
> servers using plain http.  Basically terminating HTTPS from clients and
> relaying it to backend servers using HTTP.  
> 
> We will need to implement HSTS at some point (i.e.
> Strict-Transport-Security: max-age=; includeSubDomains; preload),
> will we be able to do this in the above scenario?
> 

Yes, provided you can get rid of that redirect/rewrite script. The
background things the cache_peer logic does to the traffic will be needed
for the HTTPS transition.
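One way to get the HSTS header into responses in that scenario (an illustration, not from the thread) is to emit it from the Apache backends and let Squid relay it unchanged:

```
# Apache backend config (requires mod_headers); Squid passes the
# header through to clients as-is. max-age value is illustrative.
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```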


> Also, we will initially be providing both http and https, but will need
> to stop http at some point.  Is there a way to redirect the clients that
> try to connect via http to use https with squid?  Something like the
> rewrite engine in apache?

cache_peer can be configured to contact the peer over TLS. This can be
done individually per peer, and before the HSTS header gets added for
public viewing.
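For the http-to-https redirect itself, a common squid.conf idiom is to deny requests arriving on the plain-HTTP port and answer with a redirect (a sketch; the URL and port name are placeholders, and note the "301:" status prefix needs Squid 3.2 or later):

```
# squid.conf - bounce plain-HTTP clients to the HTTPS site.
http_port 80 accel defaultsite=www.example.com name=plain80
acl from_plain80 myportname plain80
http_access deny from_plain80
deny_info 301:https://www.example.com/ from_plain80
```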

> 
> We use RH 6.x which comes with squid 3.1.  Thanks for any feedback. 

For your particular use, a build of that with OpenSSL support should be
okay. But if you can, an upgrade to a more recent version would be better,
as there have been some important OpenSSL and TLS protocol changes since
3.1 was designed.

Amos



[squid-users] Accelerator Mode - HSTS and Redirect

2016-09-22 Thread creditu
We have been using squid in accelerator mode for a number of years. In
the current setup we have the squid frontends that send all the http
requests to the backend apache webservers using a simple redirect
script.  We need to switch to https for the public presence.

So, our initial thought would be to use https_port for public HTTPS
presence and send the requests using cache_peer to the backend apache
servers using plain http.  Basically terminating HTTPS from clients and
relaying it to backend servers using HTTP.  

We will need to implement HSTS at some point (i.e.
Strict-Transport-Security: max-age=; includeSubDomains; preload),
will we be able to do this in the above scenario?

Also, we will initially be providing both http and https, but will need
to stop http at some point.  Is there a way to redirect the clients that
try to connect via http to use https with squid?  Something like the
rewrite engine in apache?

We use RH 6.x which comes with squid 3.1.  Thanks for any feedback. 


Re: [squid-users] Question about the url rewrite before proxy out

2016-09-22 Thread Bill Yuan
Thanks for replying.

I see, it is a URL rewrite, and I am trying to find a sample to test with.


The content does not need to be updated if it is an image.


On Thursday, September 22, 2016, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 09/22/2016 04:20 AM, squid-us...@filter.luko.org  wrote:
>
> > Then not only the request needs to be rewritten, but probably the
> > page content too. [...] If that is the case,
> > then Squid doesn't seem like the right tool for the job.
>
> Why not? If rewriting is needed, an ICAP or eCAP service can rewrite the
> response body before Squid serves it.
>
> However, in practice, this kind of page rewriting does not work very
> well (regardless of which software is doing the rewrite) because many
> page URLs are formed dynamically on the client side (by Javascript code).
>
> Alex.
>
>


Re: [squid-users] libevent

2016-09-22 Thread reinerotto
Although off topic, 
>Oh, yes, we've seen. Bugs can not be closed for years. If the bug is not
obvious or can not be replayed in one action - it is ignored. <

there is no software (besides mine :-) which is free of bugs. So the amount
of bugs still present simply needs to be "manageable". More-or-less one-time
effects might even be caused by compiler glitches or "language peculiarities"
(the decision to switch from C was a big mistake in my opinion, increasing
the chances of such effects) _or_ very rare timing issues. Last but not
least, you still have the option to sponsor the fixing of a bug which, for
whatever reason, especially harms you. We are talking about open-source -
better to say, (mostly) free-of-charge - software.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/libevent-tp4679637p4679656.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] libevent

2016-09-22 Thread Yuri Voinov



On 23.09.2016 3:33, reinerotto wrote:
>> You are too few in number to provide something decent enough, and not from
> the last century.<
> The smaller the development team, the more efficient it is. Highly
> qualified staff assumed.
Oh, yes, we've seen. Bugs cannot be closed for years. If a bug is not
obvious or cannot be reproduced in one step, it is ignored.
>
> And LINUX is as suitable to event-driven programming as MVS.
> Therefore, (bad) compromise has to be made.
I will not argue. We are not here to throw maxims around.
>
>
>
>
> --
> View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/libevent-tp4679637p4679654.html






Re: [squid-users] libevent

2016-09-22 Thread reinerotto
>You are too few in number to provide something decent enough, and not from
the last century.<
The smaller the development team, the more efficient it is, assuming
highly qualified staff.
And Linux is as suitable for event-driven programming as MVS.
Therefore, a (bad) compromise has to be made.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/libevent-tp4679637p4679654.html


Re: [squid-users] SSO (kerberos)

2016-09-22 Thread Markus Moeller

Hi

 Did you try the debug option -d for ext_kerberos_ldap_group_acl to get
some debug output? Maybe it gives some indication of the problem?
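A manual test along those lines might look like this (a sketch; the user and group names are placeholders, and the helper path is taken from the quoted config):

```
# Feed one username to the helper by hand; with -d it prints why a
# lookup fails (keytab/KRB5_KTNAME problems, LDAP reachability,
# wrong group name, ...).
echo "someuser" | /usr/lib64/squid/ext_kerberos_ldap_group_acl -d -g SOMEGROUP@EXAMPLE.LAN
```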


Markus

"erdosain9"  wrote in message 
news:1474570767416-4679652.p...@n4.nabble.com...


So, I have a little more info.

This is the config:

###Kerberos Auth with ActiveDirectory###
auth_param negotiate program /lib64/squid/negotiate_kerberos_auth -d -s
HTTP/squid.example@example.lan
auth_param negotiate children 10
auth_param negotiate keep_alive on

#acl auth proxy_auth REQUIRED

external_acl_type i-limitado-krb children=10 cache=10 grace=15 %LOGIN
/usr/lib64/squid/ext_kerberos_ldap_group_acl -a -g i-limit...@example.lan

acl i-limitado external i-limitado-krb
http_access allow i-limitado



AND I HAVE THIS ERROR:
The grupos helpers are crashing too rapidly, need help!

("grupos" is Spanish for "groups"; it is the group name in AD (Samba))



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SSO-kerberos-tp4679470p4679652.html






Re: [squid-users] SSO (kerberos)

2016-09-22 Thread erdosain9
So, I have a little more info.

This is the config:

###Kerberos Auth with ActiveDirectory###
auth_param negotiate program /lib64/squid/negotiate_kerberos_auth -d -s
HTTP/squid.example@example.lan
auth_param negotiate children 10
auth_param negotiate keep_alive on

#acl auth proxy_auth REQUIRED

external_acl_type i-limitado-krb children=10 cache=10 grace=15 %LOGIN
/usr/lib64/squid/ext_kerberos_ldap_group_acl -a -g i-limit...@example.lan

acl i-limitado external i-limitado-krb
http_access allow i-limitado



AND I HAVE THIS ERROR:
The grupos helpers are crashing too rapidly, need help!

("grupos" is Spanish for "groups"; it is the group name in AD (Samba))



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SSO-kerberos-tp4679470p4679652.html


[squid-users] Errors in cache.log

2016-09-22 Thread erdosain9
Hi.
I'm getting these messages in cache.log:

 Error negotiating SSL on FD 121: error::lib(0):func(0):reason(0)
(5/0/0)
2016/09/22 14:20:36 kid1| BUG: Unexpected state while connecting to a
cache_peer or origin server
2016/09/22 14:29:23 kid1| Error negotiating SSL connection on FD 33:
error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
2016/09/22 14:29:24 kid1| Error negotiating SSL connection on FD 33:
error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
2016/09/22 14:32:02 kid1| WARNING: HTTP: Invalid Response: No object data
received for
https://r3---sn-q4f7sn7l.googlevideo.com/videoplayback?pl=21&itag=242&dur=3973.160&source=youtube&keepalive=yes&expire=1474585097&mime=video%2Fwebm&signature=14F5607A717C8A9DD579A55C69F6CDF4C2308FC5.47CE0A2B9F54F2C8C4042FF7797A08708951C276&clen=12532131&gir=yes&key=cms1&ip=190.113.224.106&ipbits=0&id=o-ABdgV3GldzFU1c8jBp6s_27abh9g_9c3hA0dbUgDkgjM&upn=4Xlvne-1RrE&lmt=1449585312748326&sparams=clen,dur,ei,expire,gir,id,initcwndbps,ip,ipbits,itag,keepalive,lmt,mime,mm,mn,ms,mv,nh,pl,requiressl,source,upn&ei=qQ3kV7rtJNT0wQTR0I_oDg&requiressl=yes&cpn=jdZ-1Vthefy2q9Og&alr=yes&ratebypass=yes&c=WEB&cver=1.20160921&redirect_counter=1&req_id=ad722d06312b3cfc&cms_redirect=yes&mm=34&mn=sn-q4f7sn7l&ms=ltu&mt=1474563648&mv=m&nh=IgpwcjAyLmV6ZTAxKgkxMjcuMC4wLjE&range=8224151-8353222&rn=840&rbuf=585667
AKA
r3---sn-q4f7sn7l.googlevideo.com/videoplayback?pl=21&itag=242&dur=3973.160&source=youtube&keepalive=yes&expire=1474585097&mime=video%2Fwebm&signature=14F5607A717C8A9DD579A55C69F6CDF4C2308FC5.47CE0A2B9F54F2C8C4042FF7797A08708951C276&clen=12532131&gir=yes&key=cms1&ip=190.113.224.106&ipbits=0&id=o-ABdgV3GldzFU1c8jBp6s_27abh9g_9c3hA0dbUgDkgjM&upn=4Xlvne-1RrE&lmt=1449585312748326&sparams=clen,dur,ei,expire,gir,id,initcwndbps,ip,ipbits,itag,keepalive,lmt,mime,mm,mn,ms,mv,nh,pl,requiressl,source,upn&ei=qQ3kV7rtJNT0wQTR0I_oDg&requiressl=yes&cpn=jdZ-1Vthefy2q9Og&alr=yes&ratebypass=yes&c=WEB&cver=1.20160921&redirect_counter=1&req_id=ad722d06312b3cfc&cms_redirect=yes&mm=34&mn=sn-q4f7sn7l&ms=ltu&mt=1474563648&mv=m&nh=IgpwcjAyLmV6ZTAxKgkxMjcuMC4wLjE&range=8224151-8353222&rn=840&rbuf=585667
Error negotiating SSL on FD 91: error::lib(0):func(0):reason(0)
(5/-1/104)



Sometimes the web browser gives something like "bad CA",

or this (IPv6??):

The following error was encountered while trying to retrieve the URL:
https://www.facebook.com/*

Connection to 2a03:2880:f105:83:face:b00c:0:25de failed.

The system returned: (101) Network is unreachable

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster.

This is my config

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8# RFC1918 possible internal network
#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
#acl localnet src 192.168.0.0/16# RFC1918 possible internal network
#IPv6 disabled
#acl localnet src fc00::/7   # RFC 4193 local private network range
#acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
#acl lan1_network src 192.168.1.0/24 ### Entire network 1 (not allowed)
#acl adminsquid src 192.168.1.172 # Administer squid

#always_direct allow lan1_network

# IP GROUPS
acl full src "/etc/squid/ips/full.lst"
acl limitado src "/etc/squid/ips/limitado.lst"
acl sistemas src "/etc/squid/ips/sistemas.lst"
acl adminis  src "/etc/squid/ips/adminis.lst"


# Monitored sites
#acl monitoredSites ssl::server_name "/etc/squid/listas/monitorizados.lst"
 

# Block advertising ( http://pgl.yoyo.org/adservers/ )
acl ads dstdom_regex "/etc/squid/listas/ad_block.lst"
http_access deny ads
#deny_info TCP_RESET ads

# Streaming
#acl youtube dstdomain .googlevideo.com
#acl youtube dstdomain .fbcdn.net
#acl youtube dstdomain .akamaihd.net
acl stream url_regex -i \.flv$
acl stream url_regex -i \.mp4$
acl stream url_regex -i watch?
acl stream url_regex -i youtube
acl stream url_regex -i facebook
acl stream url_regex -i fbcdn\.net\/v\/(.*\.mp4)\?
acl stream url_regex -i fbcdn\.net\/v\/(.*\.jpg)\? 
acl stream url_regex -i akamaihd\.net\/v\/(.*\.mp4)\?
acl stream url_regex -i akamaihd\.net\/v\/(.*\.jpg)\?

## Denied domains
acl dominios_denegados dstdomain "/etc/squid/listas/dominios_denegados.lst"

## Blocked extensions
acl multimedia urlpath_regex "/etc/squid/listas/multimedia.lst"

## Dangerous extensions
acl peligrosos urlpath_regex "/etc/squid/listas/peligrosos.lst"

#Bypass squid
#acl bypass_dst_dom  dstdomain "/etc/squid/listas/bypass_dst_domain.lst"

## Social networks
acl redes_sociales url_regex -i "/etc/squid/listas/redes_sociales.lst"


# Ports
acl SSL_ports port 443
acl SSL_ports port 8443
acl SSL_ports port 8080

acl Safe_ports port 631 # httpCUPS
acl Safe_ports port 80  # http
acl Safe_ports port 21

Re: [squid-users] Question about the url rewrite before proxy out

2016-09-22 Thread Alex Rousskov
On 09/22/2016 04:20 AM, squid-us...@filter.luko.org wrote:

> Then not only the request needs to be rewritten, but probably the 
> page content too. [...] If that is the case,
> then Squid doesn't seem like the right tool for the job.

Why not? If rewriting is needed, an ICAP or eCAP service can rewrite the
response body before Squid serves it.

However, in practice, this kind of page rewriting does not work very
well (regardless of which software is doing the rewrite) because many
page URLs are formed dynamically on the client side (by Javascript code).
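For reference, wiring a response-rewriting ICAP service into Squid looks roughly like this (a sketch; the service name and URI are placeholders, and the ICAP server itself must implement the actual rewriting):

```
# squid.conf - send response bodies through a local ICAP RESPMOD service.
icap_enable on
icap_service body_rewriter respmod_precache bypass=off icap://127.0.0.1:1344/rewrite
adaptation_access body_rewriter allow all
```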

Alex.



Re: [squid-users] Question about the url rewrite before proxy out

2016-09-22 Thread squid-users
> 
> 
> > If you input http://www.yahoo.com/page.html, this will be transformed
> > to http://192.168.1.1/www.google.com/page.html.
> 
> I got the impression that the OP wanted the rewrite to work the other way
> around.

My apologies, that does seem to be the case.

> Squid sees http://192.168.1.1/www.google.com and  re-writes it to
> http://www.google.com
> 
> > The helper just needs to print that out prepended by "OK rewrite-
> url=xxx".
> > More info at
> > http://www.squid-cache.org/Doc/config/url_rewrite_program/
> >
> > Of course, you will need something listening on 192.168.1.1 (Apache,
> > nginx,
> > whatever) that can deal with those rewritten requests.
> 
> I got the impression that the OP wanted Squid to be listening on this
> address, doing the rewrites, and then fetching from standard origin
> servers.

Then not only the request needs to be rewritten, but probably the page content 
too.  Eg, assets in the page will all be pointing at 
http://www.yahoo.com/image.png and also need transforming to 
http://192.168.1.1/www.yahoo.com/image.png.

If that is the case, then Squid doesn't seem like the right tool for the job.  
I think CGIproxy can do this (https://www.jmarshall.com/tools/cgiproxy/) or 
perhaps Apache's mod_proxy 
(https://httpd.apache.org/docs/current/mod/mod_proxy.html) would work.

Luke




Re: [squid-users] Question about the url rewrite before proxy out

2016-09-22 Thread Antony Stone
On Thursday 22 Sep 2016 at 06:04, squid-us...@filter.luko.org wrote:

> > i am looking for a proxy which can "bounce" the request, which is not a
> > classic proxy.
> > 
> > I want it works in this way.
> > 
> > e.g. a proxy is running a 192.168.1.1
> > and when i want to open http://www.yahoo.com, i just need call
> > http://192.168.1.1/www.yahoo.com the proxy can pickup the the host
> "http://www.yahoo.com" from the URI, and retrieve the info for me, so
> > it need to get the new $host from $location, and remove the $host from
> > the $location before proxy pass it. it is doable via squid?
> 
> Yes it is doable (but unusual).  First you need to tell Squid which requests
> should be rewritten, then send them to a rewrite program to be transformed. 
> Identify the domains like this:



> If you input http://www.yahoo.com/page.html, this will be transformed to
> http://192.168.1.1/www.google.com/page.html.

I got the impression that the OP wanted the rewrite to work the other way 
around.

Squid sees http://192.168.1.1/www.google.com and rewrites it to
http://www.google.com

> The helper just needs to print that out prepended by "OK rewrite-url=xxx". 
> More info at http://www.squid-cache.org/Doc/config/url_rewrite_program/
> 
> Of course, you will need something listening on 192.168.1.1 (Apache, nginx,
> whatever) that can deal with those rewritten requests.

I got the impression that the OP wanted Squid to be listening on this address, 
doing the rewrites, and then fetching from standard origin servers.

> That is an unusual way of getting requests to 192.168.1.1 though, because
> you are effectively putting the hostname component into the URL then sending
> it to a web service and expecting it to deal with that.

Yes, that's what the OP wants Squid to handle, I think.
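A url_rewrite_program helper for the scheme being discussed could be sketched like this (untested against a real Squid; the 192.168.1.1 prefix comes from the example in the thread, and it assumes the "OK rewrite-url=" helper protocol of Squid 3.4+):

```shell
#!/bin/sh
# Read "URL [extras]" lines from Squid on stdin; strip the
# http://192.168.1.1/ prefix so the embedded hostname becomes the
# real request host.
rewrite_helper() {
  while read url rest; do
    case "$url" in
      http://192.168.1.1/*)
        echo "OK rewrite-url=http://${url#http://192.168.1.1/}" ;;
      *)
        echo "ERR" ;;  # ERR = leave the URL unchanged
    esac
  done
}
# (a real script would end by calling: rewrite_helper)
```

It would be hooked in with something like url_rewrite_program /usr/local/bin/bounce.sh (path hypothetical).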


Antony.

-- 
"640 kilobytes (of RAM) should be enough for anybody."

 - Bill Gates

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] multiple instances with different outgoing addresses and 2x external nics

2016-09-22 Thread Antony Stone
On Thursday 22 Sep 2016 at 06:47, Drikus Brits wrote:

> HI Experts,
> 
> I'm struggling to get squid to work the way i need it to.
> 
> My setup :
> 
> 1x Server : Ubuntu 14
> 3x Interfaces : 1x Inside ( 192.168.100.10 ) 2x Outside connected to DSL
> (1st = 10.0.0.2, 2nd 10.0.1.2)
> 2x default routes : 1x for each DSL link

Have you configured IProute2 or similar to use both "default" routes?

If not, and you have simply told the Linux kernel that there are two default 
routes, it will only use the first one.

> Management uses proxy address : 192.168.100.10 3128
> All else uses address : 192.168.100.10 3129
> 
> Both instances have their own configuration file and squid starts both
> instances without issues. the mngt instance is configured to use
> tcp_outgoing_address : 10.0.0.2 and all_else instance configured to use
> tcp_outgoing_address : 10.0.1.2, but when i test a website that reveals
> your outside IP, it always seems to only go out via the 1 DSL network
> and not the other.

If you have simply told the Linux kernel that there are two default routes, it 
will only use the first one.

> If i remove the default route to DSL1, then both instances works via
> DSL2. My thoughts was that if the outgoing_address is 10.0.0.2 it should
> go out via DSL1 and if outgoing_address is 10.0.1.2 it should go via
> DSL2.

Sounds very much like you need to configure IProute2 to use both paths.  Look 
up LARTC for documentation.
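The classic LARTC recipe for this case looks roughly like the following (a sketch; the gateway addresses and interface names are guesses for the poster's 10.0.0.x/10.0.1.x links):

```
# Give each DSL uplink its own routing table with its own default
# route, then select the table by source address - so whatever Squid
# binds via tcp_outgoing_address leaves through the matching link.
ip route add default via 10.0.0.1 dev eth1 table 100
ip route add default via 10.0.1.1 dev eth2 table 101
ip rule add from 10.0.0.2 table 100
ip rule add from 10.0.1.2 table 101
```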

> If it try to use an outgoing address that is not the IP of the
> configured eth interface, then it complains about binding issues.

Well, yes.

> I'm not using any firewalls of sorts to manipulate routing at this
> stage. I really would prefer to use 1x VM (squid) instead of 2 seperate
> VMs running squid...
> 
> Any suggestions?

See above :)


Antony.

-- 
"The future is already here.   It's just not evenly distributed yet."

 - William Gibson

   Please reply to the list;
 please *don't* CC me.