Re: [squid-users] Squid Source Code: What files/functions receive/send packets from/to hardware

2015-02-06 Thread Priya Agarwal
Actually, I am unable to mail squid-dev, so I am asking here.
How and where does squid open the network interfaces and start listening on
them?

Regards
On Fri, Feb 6, 2015 at 12:57 PM, Priya Agarwal 
wrote:

> Hi,
> I needed some direction again. I also need to know where in the source
> code squid opens the network interface before it reads/writes from it.
> Thanks.
>
> Regards
>
> On Tue, Jan 6, 2015 at 11:37 AM, Priya Agarwal 
> wrote:
>
>> Thanks a lot. :)
>> I'll sign up for squid-dev mailing list and do any further discussions
>> there.
>>
>>
>> On Tue, Jan 6, 2015 at 12:13 AM, Amos Jeffries 
>> wrote:
>>
>>> On 6/01/2015 6:01 a.m., Priya Agarwal wrote:
>>> > Thank you for the reply.
>>> >
>>> > I do not intend to change its functionality. I just want to make it
>>> > run on a processor (Freescale's T4240). For that it has to use some
>>> > new architectural features (Data Path Acceleration Architecture)
>>> > which are a part of the processor.
>>> >
>>> > For example, suppose squid were merely swapping ipv4/mac src and dest
>>> > addresses (just an example!) in the packet header and sending it
>>> > back. I don't want to change what it does; I just want squid to
>>> > send whatever data it has prepared to a memory location. Basically,
>>> > instead of receiving from and sending to the OS stack, I want it to
>>> > read and write from memory. (Further details: this memory is a
>>> > memory-mapped device which is in turn responsible for transmitting
>>> > the frame to a network interface, ethernet.)
>>> >
>>> > So maybe if I could know where in the source code it communicates
>>> > with the OS stack.
>>>
>>> Ah, that kind of packet handling is all much, much lower level than
>>> Squid.
>>>
>>> Squid uses functions provided by the POSIX system API on socket
>>> handles ("file descriptors").
>>>
>>> http://man7.org/linux/man-pages/man7/socket.7.html
>>> http://man7.org/linux/man-pages/man2/read.2.html
>>> http://man7.org/linux/man-pages/man2/write.2.html
>>>
>>> The functions in src/fd.cc are where that happens; this is the lowest
>>> networking-I/O level of Squid.
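For illustration only (this is not Squid code): the FD-centric pattern described above can be sketched with the same syscalls via Python's os module, which wraps read(2)/write(2):

```python
import os
import socket

# A connected socket pair stands in for a client<->proxy connection.
parent, child = socket.socketpair()

# Low-level network I/O works on plain file descriptors, not socket objects:
os.write(parent.fileno(), b"GET / HTTP/1.1\r\n\r\n")  # write(2)
data = os.read(child.fileno(), 4096)                  # read(2)

print(data.decode().splitlines()[0])  # -> GET / HTTP/1.1
```

Squid layers its buffering, event loop, and TLS handling on top of exactly this kind of FD-level read/write.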
>>>
>>>
>>> NP: If you want to take this further and/or discuss any other feature
>>> additions/changes, I encourage you to sign up to the squid-dev mailing
>>> list and discuss it with the whole dev team. This list is for general
>>> user discussions (though sometimes code talk from someone doing bug
>>> investigations does slip in).
>>>
>>> Amos
>>>
>>
>>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] login expired

2015-02-06 Thread Ignazio Raia
Good morning Amos,
here is my squid.conf, basic_db_auth script, and the shell test.
Thanks a lot for your interest and help.

TEST MADE VIA ssh CONNECTION TO MY LAMP & SQUID SERVER (ssh
ignazio@192.168.2.1)
$ sudo /usr/lib/squid3/basic_db_auth --user root --password rootpasswd --md5
--cond "1" --persis

ignazio 12345678 (wrong password)
ERR login failure

ignazio mypassword  (right password)
OK
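The transcript above exercises Squid's basic auth helper protocol: Squid writes one `user password` line to the helper's stdin and expects `OK` or `ERR` back on stdout. A minimal sketch of that check (hypothetical in-memory user table, MD5 hex digests as implied by --md5; not the real basic_db_auth code, which reads from MySQL):

```python
import hashlib

# Hypothetical user table; the real helper looks these up in the database.
PASSWD = {"ignazio": hashlib.md5(b"mypassword").hexdigest()}

def check(line: str) -> str:
    """One round of the helper protocol: 'user password' in, OK/ERR out."""
    user, _, password = line.strip().partition(" ")
    if PASSWD.get(user) == hashlib.md5(password.encode()).hexdigest():
        return "OK"
    return "ERR login failure"

print(check("ignazio 12345678"))    # -> ERR login failure
print(check("ignazio mypassword"))  # -> OK
```

A real helper loops over stdin and flushes after every reply, since Squid keeps the helper process alive between lookups.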

# MY SQUID.CONF
# OPTIONS FOR AUTHENTICATION
auth_param basic program /usr/lib/squid3/basic_db_auth --user root
--password rootpasswd -md5 --cond "1" --persis 
#auth_param basic program /usr/lib/squid3/basic_ncsa_auth
/etc/squid3/squid.pass

auth_param basic children 5
auth_param basic realm Squid Proxy Web Server
auth_param basic credentialsttl 60 seconds
#authenticate_cache_garbage_interval 1 hour
#authenticate_ttl 60 seconds

# MY ACCESS CONTROLS
# -----------------------------------------------------------------------------
acl localnet src 192.168.2.0/24 #my localnet
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher 
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl password proxy_auth REQUIRED

#  TAG: MY http_access
http_access deny !password
http_access deny !Safe_ports
http_access allow localhost manager
http_access deny CONNECT !SSL_ports
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all

# NETWORK OPTIONS
http_port 
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$  0    20%     2880
# example line for deb packages
#refresh_pattern (\.deb|\.udeb)$   129600 100% 129600
refresh_pattern .       0       20%     4320

# HTTPD-ACCELERATOR OPTIONS
# -----------------------------------------------------------------------------
visible_hostname ubuntu-server

# DNS OPTIONS
# -----------------------------------------------------------------------------
dns_nameservers 62.94.0.41


#basic_db_auth script
#!/usr/bin/perl
use strict;
use DBI;
use Getopt::Long;
use Pod::Usage;
use Digest::MD5 qw(md5 md5_hex md5_base64);
$|=1;

=pod

=head1 NAME

basic_db_auth - Database auth helper for Squid

=cut

my $dsn = "DBI:mysql:database=squid";
my $db_user = "root";
my $db_passwd = "rootpasswd";
my $db_table = "passwd";
my $db_usercol = "user";
my $db_passwdcol = "password";
my $db_cond = "enabled = 1";
my $plaintext = 0;
my $md5 = 0;
my $persist = 0;
my $isjoomla = 0;
my $debug = 0;
my $hashsalt = undef;
etc etc





Re: [squid-users] Blocking Chrome and QUIC

2015-02-06 Thread Luis Miguel Silva
Antony,

*Comments inline!*

Thanks,
Luis

On Fri, Feb 6, 2015 at 3:58 PM, Antony Stone <
antony.st...@squid.open.source.it> wrote:

> On Friday 06 February 2015 at 22:54:54 (EU time), Luis Miguel Silva wrote:
>
> > As I started playing around with transparent ssl proxying, I learned that
> > Chrome uses an alternate communication (UDP based) protocol called QUIC.
>
> I'd never heard of QUIC, and http://en.wikipedia.org/wiki/QUIC doesn't
> seem to
> give much technical information on how it works, however it certainly
> confirms
> that it's based on UDP.
>
> > The problem is that, although the rules do seem to be triggered,
> > the only way I can successfully BLOCK QUIC traffic and make the browser
> > fall back to HTTP/HTTPS is by setting a default FORWARD policy to DROP:
> > *iptables -P FORWARD DROP*
>
> Er, why is that not your standard setup?
>
> Allow what you know you want, drop the rest - that's standard security
> practice.
>
> If you do set the default forward policy to drop, what problems does this
> create?
>
*This is supposed to be a generic solution, whose main intent is to filter
http/https content (not to block "all other traffic").*
*If I block all traffic by default, things will stop working, so all I want
to block is whatever NEEDS to be blocked :o)*


>
> > So my question is: *how can I completely block QUIC so I can guarantee my
> > traffic will always be redirected to Squid?*
>
> 1. See above :)
>
*Unfortunately, not an acceptable solution :o(*

>
> 2. What UDP traffic do you want to permit, except port 53 to your (quite
> possibly local) DNS servers?
>
*Games, voip, etc...*

>
> Maybe you're using VoIP, with its associated RTSP traffic, but that's
> generally
> in the port range 2-3 or even higher, and will also be coming from
> quite specific devices (telephones), and usually also to quite specific
> destinations (SIP proxies).
>
> Therefore just block all UDP traffic which isn't known to be required.
>
*I would really rather not. I just want to figure out what ports QUIC
uses :o)*
*Unfortunately, the more I talk with people, the more I'm finding out that
most people don't have any idea what QUIC is (I know I didn't about 3 days
ago, heheh).*

*I might just head over to the Chromium Google group and ask there! (I just
posted here because I was sure someone else had experienced the same problem
I am experiencing while doing transparent proxying.)*

*Thanks,*
*Luis*

>
>
> Incidentally, as a general comment I would repeat the last sentence above
> without the qualifier "UDP" :)
>
>
> Regards,
>
>
> Antony.
>
> --
> Anyone that's normal doesn't really achieve much.
>
>  - Mark Blair, Australian rocket engineer
>
>Please reply to the
> list;
>  please *don't* CC
> me.
>


Re: [squid-users] The SSL certificate database is corrupted. Please rebuild

2015-02-06 Thread Ortega Gustavo Martin
Any comments?

Thanks

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of
Ortega Gustavo Martin
Sent: Wednesday, February 4, 2015, 03:05 p.m.
To: squid-users@lists.squid-cache.org
Subject: [squid-users] The SSL certificate database is corrupted. Please rebuild

Amos, thanks for your quick reply!

I've got news:

I recompiled squid with your suggestions and removed the corrupted database,
but the same thing happens.

my squid -v now is:

Squid Cache: Version 3.4.11-20150124-r13214 configure options:  
'--prefix=/export/squid-3.4.11-20150124-r13214' '--with-maxfd=40' 
'--enable-delay-pools' '--with-large-files' '--enable-follow-x-forwarded-for' 
'--enable-default-err-language=es' '--enable-err-languages=es' 
'--enable-external-acl-helpers=wbinfo_group' '--enable-async-io' '--enable-ssl' 
'--enable-ssl-crtd' '--enable-icap-client' '--enable-ltdl-convenience' 
'--with-openssl=/export/SOURCES/openssl-1.0.2'

The complete line of cache.log is:

2015/02/04 15:00:57 kid1| helperOpenServers: Starting 1/200 'ssl_crtd' processes
wrong number of fields on line 8 (looking for field 6, got 1, '' left)
(ssl_crtd): The SSL certificate database () is corrupted. Please rebuild
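The ssl_crtd database is an OpenSSL-style index.txt with a fixed number of TAB-separated fields per line, which is why a stray fragment on its own line trips the "wrong number of fields" check. Locating such lines can be sketched like this (illustrative only; 6 fields assumed, matching the error message):

```python
EXPECTED_FIELDS = 6  # per the error: "looking for field 6, got 1"

def find_bad_lines(text: str):
    """Return (line_number, field_count) for malformed index.txt lines."""
    bad = []
    for n, line in enumerate(text.splitlines(), start=1):
        fields = line.split("\t")
        if line and len(fields) != EXPECTED_FIELDS:
            bad.append((n, len(fields)))
    return bad

# Toy sample: a valid-looking 6-field entry followed by a stray fragment.
sample = "V\t150828132043Z\t\t1BDA35\tunknown\t/CN=example.com\ned\n"
print(find_bad_lines(sample))  # -> [(2, 1)]
```

Deleting (or repairing) the reported line, then restarting Squid, is the usual way past this error once the cause of the corruption is fixed.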

Thanks, Gustavo.

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of
Amos Jeffries
Sent: Wednesday, February 4, 2015, 02:15 p.m.
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] The SSL certificate database is corrupted. Please
rebuild

On 5/02/2015 4:33 a.m., Ortega Gustavo Martin wrote:
> Hello, I found this error multiple times in cache.log, and then squid
> crashed and entered a loop.
> 
> I found one corrupted line in "index.txt" in the database directory. 
> Last two lines are:
> 
> V   150828132043Z
> 1BDA35020BA8933E63507E7D5A59386C8329A3D3unknown
> /CN=zqnvza.bay.livefilestore.com+Sign=signTrusted ed
> 
> 
> I thought that "ed" is the corrupted line.
> 
> 
> This is my output of "squid -v" Squid Cache: Version
> 3.4.11-20150124-r13214 configure options:
> '--prefix=/export/squid-3.4.11-20150124-r13214' '--with-maxfd=40'
> '--enable-delay-pools' '--enable-referer-log'
> '--enable-useragent-log'

Referer and User-Agent logs are now built-in logformat definitions.
Remove these ./configure options.

> '--enable-auth'

Auth is enabled by default; that ./configure option exists so you can
DISABLE authentication in Squid.

> '--with-large-files'
> '--enable-follow-x-forwarded-for'
> '--enable-default-err-language=Spanish'
> '--enable-err-languages=Spanish'

"Spanish" is not an ISO 639 language code.

Use:  --enable-default-err-language=es


> '--enable-external-acl-helpers=wbinfo_group' '--enable-async-io'
> '--enable-ssl' '--enable-ssl-crtd' '--enable-icap-client'
> '--enable-ltdl-convenience'
> '--with-openssl=/export/SOURCES/openssl-1.0.1c' '--disable-ipv6'
> 

Please begin your migration to IPv6. BCP 177 (RFC 6540) makes it clear that IPv6 
support is now mandatory for all machinery and software using the IP protocol. 
Current versions of Squid have no problems with IPv6 (the remaining problems are 
all in the network, and workarounds are configurable).

Cheers
Amos


Re: [squid-users] Blocking Chrome and QUIC

2015-02-06 Thread Antony Stone
On Friday 06 February 2015 at 22:54:54 (EU time), Luis Miguel Silva wrote:

> As I started playing around with transparent ssl proxying, I learned that
> Chrome uses an alternate communication (UDP based) protocol called QUIC.

I'd never heard of QUIC, and http://en.wikipedia.org/wiki/QUIC doesn't seem to 
give much technical information on how it works, however it certainly confirms 
that it's based on UDP.

> The problem is that, although the rules seem to successfully be triggered,
> the only way I can successfully BLOCK QUIC traffic and make the browser
> fall back to HTTP/HTTPS is by setting a default FORWARD policy to DROP:
> *iptables -P FORWARD DROP*

Er, why is that not your standard setup?

Allow what you know you want, drop the rest - that's standard security 
practice.

If you do set the default forward policy to drop, what problems does this 
create?

> So my question is: *how can I completely block QUIC so I can guarantee my
> traffic will always be redirected to Squid?*

1. See above :)

2. What UDP traffic do you want to permit, except port 53 to your (quite 
possibly local) DNS servers?

Maybe you're using VoIP, with its associated RTSP traffic, but that's generally 
in the port range 2-3 or even higher, and will also be coming from 
quite specific devices (telephones), and usually also to quite specific 
destinations (SIP proxies).

Therefore just block all UDP traffic which isn't known to be required.


Incidentally, as a general comment I would repeat the last sentence above 
without the qualifier "UDP" :)


Regards,


Antony.

-- 
Anyone that's normal doesn't really achieve much.

 - Mark Blair, Australian rocket engineer

   Please reply to the list;
 please *don't* CC me.


[squid-users] Blocking Chrome and QUIC

2015-02-06 Thread Luis Miguel Silva
Dear all,

This isn't entirely a squid question but more like a "transparent proxying"
question (which I'm hoping you guys will be able to help me with)...

As I started playing around with transparent ssl proxying, I learned that
Chrome uses an alternate communication (UDP based) protocol called QUIC.

When the browser uses that protocol, Squid obviously isn't used as a proxy,
so I'm trying to block QUIC traffic to force the browsers to fall back to
HTTP/HTTPS.

At first, I found out that QUIC communicates over UDP 443 but, since
blocking traffic from going out on that port didn't seem to work, I decided
to use TCPView (on the client computer) and look at tcpdump to try and
figure out what other ports it uses...

After looking at TCPView, I was able to see traffic going out on:
tcp 80
tcp 443
tcp 5228
udp 80
udp 443
udp 5353

...so I tried to block traffic going out on those ports:
root@appliance:~# cat /etc/iptables/rules.v4 | grep -i forward
:FORWARD DROP [41:4010]
-A FORWARD -i br0 -p tcp -m tcp --dport 5228 -j REJECT --reject-with
icmp-port-unreachable
-A FORWARD -i br0 -p udp -m udp --dport 5353 -j REJECT --reject-with
icmp-port-unreachable
-A FORWARD -i br0 -p udp -m udp --dport 80 -j REJECT --reject-with
icmp-port-unreachable
-A FORWARD -i br0 -p udp -m udp --dport 443 -j REJECT --reject-with
icmp-port-unreachable
root@appliance:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
REJECT tcp  --  anywhere anywhere tcp dpt:5228
reject-with icmp-port-unreachable
REJECT udp  --  anywhere anywhere udp dpt:mdns
reject-with icmp-port-unreachable
REJECT udp  --  anywhere anywhere udp dpt:http
reject-with icmp-port-unreachable
REJECT udp  --  anywhere anywhere udp dpt:https
reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination
root@appliance:~# iptables -L -n -v
Chain INPUT (policy ACCEPT 6182 packets, 2536K bytes)
 pkts bytes target prot opt in out source
destination

Chain FORWARD (policy ACCEPT 1343 packets, 160K bytes)
 pkts bytes target prot opt in out source
destination
   18   912 REJECT tcp  --  br0*   0.0.0.0/0
0.0.0.0/0tcp dpt:5228 reject-with icmp-port-unreachable
  100 30714 REJECT udp  --  br0*   0.0.0.0/0
0.0.0.0/0udp dpt:5353 reject-with icmp-port-unreachable
0 0 REJECT udp  --  br0*   0.0.0.0/0
0.0.0.0/0udp dpt:80 reject-with icmp-port-unreachable
   73 87052 REJECT udp  --  br0*   0.0.0.0/0
0.0.0.0/0udp dpt:443 reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT 6913 packets, 2386K bytes)
 pkts bytes target prot opt in out source
destination
root@appliance:~#

The problem is that, although the rules do seem to be triggered,
the only way I can successfully BLOCK QUIC traffic and make the browser
fall back to HTTP/HTTPS is by setting a default FORWARD policy to DROP:
*iptables -P FORWARD DROP*
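For reference, the per-port REJECT rules shown earlier can be generated from a single list, which makes it easy to add further QUIC-related ports as they are discovered (a sketch; br0 and the port list are just the values from the rules.v4 listing above):

```python
# Ports observed via TCPView/tcpdump; br0 is the LAN bridge interface.
BLOCK = [("tcp", 5228), ("udp", 5353), ("udp", 80), ("udp", 443)]

def forward_reject_rules(iface: str = "br0"):
    """Build iptables FORWARD rules matching the rules.v4 listing above."""
    return [
        f"-A FORWARD -i {iface} -p {proto} -m {proto} --dport {port}"
        f" -j REJECT --reject-with icmp-port-unreachable"
        for proto, port in BLOCK
    ]

for rule in forward_reject_rules():
    print(rule)
```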

What I conclude from this is that there MUST be some more FORWARD traffic
originating from Chrome that I have no idea how to catch and filter.

So my question is: *how can I completely block QUIC so I can guarantee my
traffic will always be redirected to Squid?*

Thanks in advance,
Luis


Re: [squid-users] Correct order of acl rules?

2015-02-06 Thread Yuri Voinov

https://regex101.com/

is a great resource.

Hm?

07.02.2015 2:06, Walter H. пишет:
> On 06.02.2015 20:38, Amos Jeffries wrote:
>> On 7/02/2015 8:27 a.m., Amos Jeffries wrote:
>>> On 7/02/2015 8:19 a.m., Walter H. wrote:
 the file blockurls-regex-acl.squid
 contains e.g.
 ^http:\/\/s[0-9]\.domain\.tld\/

 the file allowurls-regex-acl.squid
 contains e.g.
 ^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif

 the purpose should be, that only gif images of root directory of only
 the subdomains beginning with s1 or s2 of domain.tld should be allowed
>> Also, NO that is not what your rules do. Take another look at all the
>> sub-domains s[1-2]+ will match against...
>>
>> s1, s2,
>> s11, s12, s21, s22,
>> s111, s112, s121, s122, s211, s212, s221, s222,
>> s ...
>> ...
>>
>>
> of course, my mistake ...
 the following url is blocked

 http://s2443.domain.tld/ghfhfhf.gif

 why?
>>> "4" != "\."
>>>
> that's right ...
>
> should the following be allowed or blocked?
> http://s1.domain.tld/file.gif
> http://s2.domain.tld/file.gif
>
> I'd say they must be allowed ...
>
> Greetings,
> Walter
>
>
>




Re: [squid-users] Correct order of acl rules?

2015-02-06 Thread Walter H.

On 06.02.2015 20:38, Amos Jeffries wrote:

On 7/02/2015 8:27 a.m., Amos Jeffries wrote:

On 7/02/2015 8:19 a.m., Walter H. wrote:

the file blockurls-regex-acl.squid
contains e.g.
^http:\/\/s[0-9]\.domain\.tld\/

the file allowurls-regex-acl.squid
contains e.g.
^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif

the purpose should be that only gif images in the root directory of only
the subdomains beginning with s1 or s2 of domain.tld should be allowed

Also, NO that is not what your rules do. Take another look at all the
sub-domains s[1-2]+ will match against...

s1, s2,
s11, s12, s21, s22,
s111, s112, s121, s122, s211, s212, s221, s222,
s ...
...



of course, my mistake ...

the following url is blocked

http://s2443.domain.tld/ghfhfhf.gif

why?

"4" != "\."


that's right ...

should the following be allowed or blocked?
http://s1.domain.tld/file.gif
http://s2.domain.tld/file.gif

I'd say they must be allowed ...

Greetings,
Walter





Re: [squid-users] Correct order of acl rules?

2015-02-06 Thread Amos Jeffries
On 7/02/2015 8:27 a.m., Amos Jeffries wrote:
> On 7/02/2015 8:19 a.m., Walter H. wrote:
>> the file blockurls-regex-acl.squid
>> contains e.g.
>> ^http:\/\/s[0-9]\.domain\.tld\/
>>
>> the file allowurls-regex-acl.squid
>> contains e.g.
>> ^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif
>>
>> the purpose should be, that only gif images of root directory of only
>> the subdomains beginning with s1 or s2 of domain.tld should be allowed

Also, NO that is not what your rules do. Take another look at all the
sub-domains s[1-2]+ will match against...

s1, s2,
s11, s12, s21, s22,
s111, s112, s121, s122, s211, s212, s221, s222,
s ...
...


>>
>> the following url is blocked
>>
>> http://s2443.domain.tld/ghfhfhf.gif
>>
>> why?
> 
> "4" != "\."
> 

Amos



Re: [squid-users] Correct order of acl rules?

2015-02-06 Thread Amos Jeffries
On 7/02/2015 8:19 a.m., Walter H. wrote:
> the file blockurls-regex-acl.squid
> contains e.g.
> ^http:\/\/s[0-9]\.domain\.tld\/
> 
> the file allowurls-regex-acl.squid
> contains e.g.
> ^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif
> 
> the purpose should be, that only gif images of root directory of only
> the subdomains beginning with s1 or s2 of domain.tld should be allowed
> 
> the following url is blocked
> 
> http://s2443.domain.tld/ghfhfhf.gif
> 
> why?

"4" != "\."
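The point is easy to verify by testing both patterns against the URL (illustrative Python; the patterns are copied verbatim from the files quoted above):

```python
import re

block = re.compile(r"^http:\/\/s[0-9]\.domain\.tld\/")
allow = re.compile(r"^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif")

url = "http://s2443.domain.tld/ghfhfhf.gif"
print(bool(block.match(url)))  # -> False: after "s2", "4" is not "."
print(bool(allow.match(url)))  # -> False: [1-2]+ stops at "4", and "4" is not "."
```

Since neither the allow nor the block regex matches, the decision for this URL falls through to the later http_access rules.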

Amos



[squid-users] Correct order of acl rules?

2015-02-06 Thread Walter H.

Hello,

my squid.conf contains the following lines - in this order ...

acl allow_urlpaths urlpath_regex -i "/etc/squid/allowurlpaths-regex-acl.squid"
acl block_urlpaths urlpath_regex -i "/etc/squid/blockurlpaths-regex-acl.squid"

acl allow_urls url_regex -i "/etc/squid/allowurls-regex-acl.squid" <--
acl block_urls url_regex -i "/etc/squid/blockurls-regex-acl.squid" <--
acl allow_domains_list dstdomain "/etc/squid/allowdomains-list-acl.squid"
acl block_domains_list dstdomain "/etc/squid/blockdomains-list-acl.squid"
acl block_domains_listex dstdomain "/etc/squid/blockdomains-listex-acl.squid"
acl allow_domains_regex dstdom_regex -i "/etc/squid/allowdomains-regex-acl.squid"
acl block_domains_regex dstdom_regex -i "/etc/squid/blockdomains-regex-acl.squid"

deny_info ERR_URL_BLOCKED block_urlpaths
deny_info ERR_URL_BLOCKED block_urls
deny_info ERR_DOMAIN_BLOCKED block_domains_list
deny_info ERR_DOMAIN_BLOCKED block_domains_listex
deny_info ERR_DOMAIN_BLOCKED block_domains_regex
http_access allow allow_urlpaths
http_access deny block_urlpaths
http_access allow allow_urls <--
http_access deny block_urls <--
http_access allow allow_domains_list
http_access deny block_domains_list
http_access deny block_domains_listex
http_access allow allow_domains_regex
http_access deny block_domains_regex

I marked 4 lines, and I get quite strange - or maybe correct - behaviour ...

the file blockurls-regex-acl.squid
contains e.g.
^http:\/\/s[0-9]\.domain\.tld\/

the file allowurls-regex-acl.squid
contains e.g.
^http:\/\/s[1-2]+\.domain\.tld\/[a-z0-9\_\-\.]+\.gif

the purpose should be that only gif images in the root directory of only
the subdomains beginning with s1 or s2 of domain.tld are allowed


the following url is blocked

http://s2443.domain.tld/ghfhfhf.gif

why?

Thanks,
Walter






Re: [squid-users] Tunnelled devices losing access to squid

2015-02-06 Thread Yuri Voinov

I have one ;)

http://i.imgur.com/VaPu6pq.png


06.02.2015 21:15, Amos Jeffries пишет:
> On 7/02/2015 3:37 a.m., Raymond Norton wrote:
>> I have the following scenario:
>>
>>
>>
>>  We have a number of Verizon Aps configured to run associated devices
>> through a GRE
>> tunnel between Verizon and our network, using a 10.99.0.0/16 subnet which
>> is NATed to a public address. Policy based routing sends all
>> port 80 and 443 traffic originating from 10.99.0.0/16 to qlproxy IP
>> (10.10.1.85) (squid proxy). IPtables on qlproxy box port-forwards all 80
>> and 443 traffic to 3126 & 3127. Qlproxy (4.0) has appropriate
>> transparent and ssl_bump rules to process incoming traffic.
>>
>>
>>
>>
>> Squid logs show the request for web pages is made via the policy based
>> routing (Mikrotik Firewall/Router), but nothing is returned to the
>> requesting device. It just simply times out after a long wait.
>>
>
> Considered Path-MTU discovery?
>
> Make sure that ICMP (and ICMPv6) are enabled and working on all networks
> the traffic traverses between Squid and the devices.
>
> Amos
>




Re: [squid-users] Tunnelled devices losing access to squid

2015-02-06 Thread Amos Jeffries
On 7/02/2015 3:37 a.m., Raymond Norton wrote:
> I have the following scenario:
> 
> 
> 
>  We have a number of Verizon Aps configured to run associated devices
> through a GRE
> tunnel between Verizon and our network, using a 10.99.0.0/16 subnet which
> is NATed to a public address. Policy based routing sends all
> port 80 and 443 traffic originating from 10.99.0.0/16 to qlproxy IP
> (10.10.1.85) (squid proxy). IPtables on qlproxy box port-forwards all 80
> and 443 traffic to 3126 & 3127. Qlproxy (4.0) has appropriate
> transparent and ssl_bump rules to process incoming traffic.
> 
> 
> 
> 
> Squid logs show the request for web pages is made via the policy based
> routing (Mikrotik Firewall/Router), but nothing is returned to the
> requesting device. It just simply times out after a long wait.
> 

Considered Path-MTU discovery?

Make sure that ICMP (and ICMPv6) are enabled and working on all networks
the traffic traverses between Squid and the devices.
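To illustrate why PMTU matters here (back-of-the-envelope numbers; exact overhead depends on the GRE options and outer IP version in use):

```python
ETH_MTU = 1500   # typical Ethernet MTU
IP_HDR = 20      # outer IPv4 header added by the tunnel
GRE_HDR = 4      # base GRE header (no key/sequence/checksum options assumed)

tunnel_mtu = ETH_MTU - IP_HDR - GRE_HDR
print(tunnel_mtu)  # -> 1476

# A full-size 1500-byte TCP segment with DF set cannot enter the tunnel;
# if the ICMP "fragmentation needed" reply never reaches the sender,
# the connection stalls exactly once large packets start flowing.
```

That stall pattern (small requests get through, large responses hang until timeout) matches the symptoms described above.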

Amos



[squid-users] Tunnelled devices losing access to squid

2015-02-06 Thread Raymond Norton

I have the following scenario:



 We have a number of Verizon APs configured to run associated devices through a
GRE tunnel between Verizon and our network, using a 10.99.0.0/16 subnet which
is NATed to a public address. Policy-based routing sends all
port 80 and 443 traffic originating from 10.99.0.0/16 to the qlproxy IP
(10.10.1.85) (squid proxy). IPtables on the qlproxy box port-forwards all 80
and 443 traffic to 3126 & 3127. Qlproxy (4.0) has appropriate
transparent and ssl_bump rules to process incoming traffic.




Squid logs show the request for web pages is made via the policy-based
routing (Mikrotik Firewall/Router), but nothing is returned to the
requesting device. It simply times out after a long wait.



However, if I configure a tunnelled device to use port 3128 in the browser's
proxy settings, or if a tunnelled device requests the proxy URL via port 80,
web requests start working as expected, both for the configured device and
for all devices hitting the proxy transparently from the tunnel.



This works as long as the tunnelled devices generate some form of traffic.
If things are left dormant for 3-5 minutes, traffic stops working again
until a device requests the proxy URL via port 80. As a workaround, to
minimize complaints, I created a cron job that runs wget against the proxy
URL every couple of minutes. As long as the wget command runs, the Internet
works fine for all tunnelled devices.



On a side note, policy routing of local 10.10.0.0/16 devices works just
fine through the proxy transparently, without interruptions, even when the
tunnelled devices stop working. The Internet also works fine if we send
tunnelled traffic through and NAT it the same as the 10.10.0.0/16 network,
bypassing the proxy.





Squid config:



icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Client-Username
icap_service_failure_limit -1
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
acl qlproxy_icap_edomains dstdomain "/opt/qlproxy/etc/squid/icap_exclusions_domains.conf"
acl qlproxy_icap_etypes rep_mime_type "/opt/qlproxy/etc/squid/icap_exclusions_contenttypes.conf"
adaptation_access qlproxy1 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_etypes
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl localnet src fc00::/7
acl localnet src fe80::/10
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
include "/opt/qlproxy/etc/squid/squid.acl"
http_port  3126 transparent
https_port 3127 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/opt/qlproxy/etc/myca.pem
http_port  3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/opt/qlproxy/etc/myca.pem
sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/spool/squid3_ssldb -M 4MB
forward_max_tries 25
cache_mem 1024 MB
maximum_object_size_in_memory 1024 KB
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$  0    20%     2880
refresh_pattern .       0       20%     4320
shutdown_lifetime 3 seconds
visible_hostname qlproxy
always_direct allow all
icap_enable on
icap_service_failure_limit -1
icap_preview_enable on
icap_persistent_connections on
adaptation_send_client_ip on
adaptation_send_username on
icap_service qlproxy1 reqmod_precache icap://127.0.0.1:1344/reqmod bypass=0
icap_service qlproxy2 respmod_precache icap://127.0.0.1:1344/respmod bypass=0
acl qlproxy_icap_edomains dstdomain "/opt/qlproxy/etc/squid/icap_exclusions_domains.conf"
acl qlproxy_icap_etypes rep_mime_type "/opt/qlproxy/etc/squid/icap_exclusions_contenttypes.conf"
adaptation_access qlproxy1 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_etypes
acl icap_bypass_to_localnet dst 10.0.0.0/8
acl icap_bypass_to_localnet dst 172.16.0.0/12
acl icap_bypass_to_localnet dst 192.168.0.0/16
adaptation_access qlproxy1 deny icap_bypass_to_localnet
adaptation_access qlproxy2 deny icap_bypass_to_localnet

Re: [squid-users] Problems with squid 3.5.1

2015-02-06 Thread Stefano Ansaloni
Tested with icap disabled: the issue is still there.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problems with squid 3.5.1

2015-02-06 Thread FredB


> I'm using icap (for clamav).
> 


Please, can you try without it?




Regards,

Fred

http://numsys.eu
http://e2guardian.org

 


Re: [squid-users] Problems with squid 3.5.1

2015-02-06 Thread Stefano Ansaloni
I'm not using authentication (the proxy doesn't require any login/password).
I'm using icap (for clamav).


Re: [squid-users] Custom requirement from Squid proxy logs

2015-02-06 Thread Amos Jeffries
On 6/02/2015 10:23 a.m., l...@technomicssolutions.com wrote:
> Actually, I have multiple websites; some use Google Analytics and some
> use Adobe. That is why I concentrated on the Squid proxy, as it logs
> corresponding entries for all types of analytics. Just to make it
> analytics-independent, can we have any sort of solution?
> 

Why???  Seriously, WHY?

You are measuring the subset of users willing to let the analytics
companies measure *them*, AND, out of that, only the subset of things
the analytics actually measure.

Whereas you have a reverse proxy, right?
*All* users go through that to reach your sites, so in the proxy you
have access to far more information than the analytics sites ever could.
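To make that concrete: the per-site numbers the analytics scripts produce are already derivable from access.log. Here is a minimal sketch, assuming the default "squid" native log format where the request URL is the 7th whitespace-separated field (the layout differs if logformat was customised); the sample lines are invented for illustration:

```python
from collections import Counter

def count_requests(log_lines):
    """Count requests per requested host from Squid access.log lines
    in the default "squid" native format (URL is the 7th field)."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed/truncated lines
        url = fields[6]
        # Strip scheme, path and port to keep only the host part.
        # CONNECT requests log "host:port" with no scheme, so the
        # same expression handles both forms.
        host = url.split("://")[-1].split("/")[0].split(":")[0]
        counts[host] += 1
    return counts

sample = [
    "1423180800.123 120 10.0.0.5 TCP_MISS/200 2345 GET http://example.com/index.html - HIER_DIRECT/93.184.216.34 text/html",
    "1423180801.456  80 10.0.0.6 TCP_HIT/200 512 GET http://example.com/logo.png - NONE/- image/png",
    "1423180802.789 200 10.0.0.5 TCP_MISS/200 9876 CONNECT mail.example.org:443 - HIER_DIRECT/203.0.113.7 -",
]
print(count_requests(sample))
```

Fed your real access.log, this counts every client that touched the proxy, not just the subset the analytics scripts managed to measure.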

Amos


Re: [squid-users] SSL-bump certificate issues (mostly on Chrome, when accessing Google websites)

2015-02-06 Thread Amos Jeffries
On 6/02/2015 9:32 p.m., Amos Jeffries wrote:
> On 6/02/2015 6:10 p.m., Luis Miguel Silva wrote:
>> Dear all,
>>
>> I recently compiled squid-3.4.9 with ssl-bump support and, although it is
>> working for the most part, I'm having some issues accessing some websites.
>>
>> The behavior is REALLY weird so I'm going to try and describe it the best I
>> can:
>> - If i access https://www.google.com/ in Chrome, I could see that it was
>> processing my certificate MOST of the times...
>> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg
>> - some other times, it seemed to bypass my proxy altogether and I finally
>> figured out it was because Chrome will try to access QUIC enabled websites
>> using that protocol, so it would bypass my firewall redirect rules! I
>> believe I now have solved this by blocking FORWARDING traffic on port 443
>> udp...
> 
> reply_header_access Alternate-Protocol deny all
> 
> This was added by default in 3.5. Your report is the final straw for
> me; I'm backporting it to 3.4 now for inclusion in the next security release.

Meh, forgetful. Last straw was a while back. It's in 3.4.10 and later.

So ... "please upgrade to a current release", blah blah blah.

Amos



Re: [squid-users] derive HTTP/HTTPS upload traffic to a secondary interface.

2015-02-06 Thread Amos Jeffries
On 6/02/2015 8:59 p.m., Josep Borrell wrote:
> Hi,
> 
> I have a squid box with two interfaces. One ADSL 20/1Mb and one SHDSL 4/4Mb.
> It is a school and they are working with Google Apps for Education.
> They do a lot of uploading and when using the ADSL, it collapses promptly.
> Is it possible to divert only HTTP/HTTPS upload traffic to the SHDSL and
> continue surfing with the ADSL?

In a roundabout way.

If you look at the OSI model of networking, Squid sits at layers 4-7,
and those interfaces are part of layers 1-2. There is a whole layer 3
(the IP layer) disconnecting them in between.

What you can do in Squid is set one of the tcp_outgoing_address,
tcp_outgoing_tos, or tcp_outgoing_mark directives to label the TCP
traffic going out of Squid. The system's routing rules then need to take
that detail from TCP and decide which interface to use.



> Maybe using one acl with methods POST and UPLOAD and some routing magic ?

Something like this:

squid.conf:
 acl PUTPOST method PUT POST
 tcp_outgoing_address 192.0.2.1 PUTPOST

Where 192.0.2.1 is the IP address the system uses to send out the SHDSL.
You may need both an IPv4 and an IPv6 outgoing address set using the PUTPOST acl.
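To complete the picture on the routing side, a hedged sketch using the mark-based variant instead (assuming Linux with iproute2; the mark value 0x2, table number 100, gateway 192.0.2.254 and interface eth1 are all illustrative, not taken from this thread):

```
# squid.conf: label upload traffic with a netfilter mark
#   tcp_outgoing_mark 0x2 PUTPOST

# shell (as root): route marked traffic out the SHDSL link
ip route add default via 192.0.2.254 dev eth1 table 100
ip rule add fwmark 0x2 table 100
```

With this in place the kernel, not Squid, picks the SHDSL interface for anything Squid has marked.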

Amos



Re: [squid-users] Problems with squid 3.5.1

2015-02-06 Thread FredB

I forgot: are you using the ICAP protocol (AV)?



Regards,

Fred

http://numsys.eu
http://e2guardian.org



Re: [squid-users] login expired

2015-02-06 Thread FredB

> 2) Due to the above problem I configured an access control via
> htpasswd
> using basic_ncsa_auth.
> In this case, after the required credentials and the correct
> insertion squid
> gives me access to the internet.
> Now the question is: can I have the credentials expire after a
> certain time?


I don't know about recent versions (3.4 or 3.5), but I guess you can't. There
was a way in the wiki but it doesn't work well, so I made a little patch
http://numsys.eu/divers/squid/auth.patch for basic auth, but I can't say if it
works with the latest Squid ...
E.g.: credentialsttl 30 minutes -> after 30 minutes the pop-up appears. Very
useful to protect your access: the "bad" and hidden requests are banned when
the user is gone (in my case spyware and other plugins), and under high load
it reduces Squid's work, because many users are disconnected.

With credentialsttl 10 hours, the users are connected for the working day, and
when someone is missing while his browser is open, his requests are denied (407).

Perhaps a proper way is to create a new option like authenticationttl related
to CRED_BANNED (a new value in my patch)
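For reference, the TTL being discussed is set on the auth scheme in squid.conf; a minimal sketch with basic_ncsa_auth (helper and password-file paths are illustrative). Note that in stock Squid the expiry only triggers a silent re-verification via the helper; the visible pop-up is what the patch described above adds:

```
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwd
auth_param basic children 5
auth_param basic realm proxy
auth_param basic credentialsttl 30 minutes
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```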



Regards,

Fred

http://numsys.eu
http://e2guardian.org



Re: [squid-users] login expired

2015-02-06 Thread Amos Jeffries
On 6/02/2015 11:43 a.m., Ignazio Raia wrote:
> 
> Hello everyone, 
> I installed a Squid proxy server and it works perfectly. 
> I have two questions to ask about the authentication process. 
> 1) I configured the basic_db_auth, but the browser keeps asking login and
> password even though it is right. In this regard I run the script from the
> shell that responds correctly. The file basic_db_auth is in /usr/lib/squid3.
> I just changed the parameters related to my mysql db (db name, user,
> table name, etc.). 
> Can anyone help me and tell me where am I wrong? 

We need to see your squid.conf contents to answer that.

NP: if you are on one of those OSes that insist on overwriting squid.conf
with the 270KB documentation file, please drop the comments:
  grep -v -E "^($|#)" squid.conf


At a guess it means the DB could not be connected to, or you forgot
about the --cond parameter default value.
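For reference, a minimal sketch of the basic_db_auth wiring (DB name, user and password here are illustrative). The --cond default is "enabled = 1", so if your table has no `enabled` column the query fails; passing an empty --cond disables that check:

```
auth_param basic program /usr/lib/squid3/basic_db_auth \
    --dsn "DBI:mysql:database=squid" --user squid --password secret \
    --table passwd --usercol user --passwdcol password --cond ""
auth_param basic children 5
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```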


> 
> 2) Due to the above problem I configured an access control via htpasswd
> using basic_ncsa_auth. 
> In this case, after the required credentials and the correct insertion squid
> gives me access to the internet. 
> Now the question is: can I have the credentials expire after a certain time?
> I tried to set credentialttl = 300 seconds, but spent the time with no
> activity I do not receive a new login request. 
> The parameter credentialttl is designed for this purpose?

Yes.

If authentication is working properly, you/the user should only ever see
one login prompt at the start and never again.
The browser is constantly delivering updated/current credentials, and
Squid re-verifies those credentials via the helper whenever the TTL
expires or they actually change. But none of that complexity is visible
to the user - the credentials have not changed.

Amos



Re: [squid-users] Problems with squid 3.5.1

2015-02-06 Thread FredB

> 
> 
> @FrebB:
> I really don't know what identification helper is (I'm not a squid
> guru, please explain or drop a link).
> I'm on firefox 31.4.0esr (slackware linux 13.1).
> 


I mean authentication from Squid: a pop-up asking for an account (login and password)


> @Eliezer:
> As FredB said, the issue comes up randomly for both http and https
> sites (fun fact: if I try to reload a couple of times the website
> after receiveing the "cannot connect to proxy" page, the site loads
> normally).
> 


Yes, me too, and it's hard to investigate; it's a random problem

 





Re: [squid-users] SSL-bump certificate issues (mostly on Chrome, when accessing Google websites)

2015-02-06 Thread Amos Jeffries
On 6/02/2015 6:10 p.m., Luis Miguel Silva wrote:
> Dear all,
> 
> I recently compiled squid-3.4.9 with ssl-bump support and, although it is
> working for the most part, I'm having some issues accessing some websites.
> 
> The behavior is REALLY weird so I'm going to try and describe it the best I
> can:
> - If i access https://www.google.com/ in Chrome, I could see that it was
> processing my certificate MOST of the times...
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg
> - some other times, it seemed to bypass my proxy altogether and I finally
> figured out it was because Chrome will try to access QUIC enabled websites
> using that protocol, so it would bypass my firewall redirect rules! I
> believe I now have solved this by blocking FORWARDING traffic on port 443
> udp...

reply_header_access Alternate-Protocol deny all

This was added by default in 3.5. Your report is the final straw for me;
I'm backporting it to 3.4 now for inclusion in the next security release.

NOTE that this firewall bypass behaviour, it seems, does not qualify for
a CVE security rating, because it is an intentional *designed* behaviour
of Chrome using a designed feature of HTTP.
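For anyone stuck on an older release, both halves of the workaround described in this thread can be sketched as follows (the iptables invocation is illustrative and assumes the proxy box forwards the clients' traffic):

```
# squid.conf (pre-3.5): strip the QUIC advertisement before clients see it
reply_header_access Alternate-Protocol deny all

# shell (as root): drop forwarded QUIC (UDP toward port 443)
iptables -A FORWARD -p udp --dport 443 -j DROP
```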



> - the weird thing is that, if I then try and access https://gmail.com, I
> get a certificate error:
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg#1

Because with HTTPS traffic to a proxy, the proxy sets up the tunnel using
(a) a CONNECT request, or (b) an intercepted port 443 - in both cases the
encryption handling happens before any server response message with QUIC
headers gets involved.


> - ...though, sometimes, I can access https://mail.gmail.com/ just fine
> (without any certificate errors), but stop being able to as soon as I try
> to access https://gmail.com/ and the browser complains about the
> certificate.

The Google TLS certificates, I've read, were issued with the CN label
"mail.google.com" plus wildcards for other G* domains. This might be
related to that behaviour if the main CN is mimicked but the wildcards are not.


> -- and, according to my tests, I can access it from firefox just fine MOST
> of the times:
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg#2
> -- though I have also seen situations where Firefox also complains about a
> certificate error when connecting to gmail.com
> - and, although I cannot reproduce it 100% of the times, sometimes, even
> though I have my iptables redirect rules ON, the browser still seems to
> "connect direct" (or, at least, it shows it has the original certificate)!
> -- like I said, at first, I was able to trace this back to QUIC in Chrome
> but...I'm currently blocking traffic on port 443 udp so I don't know what's
> happening here (does it use different ports?!)

Very possible. Since the port is delivered in the Alternate-Protocol
header, they can change it anytime. I've seen both ports 80 and 443 in
use. Blocking the reply header is the surest way of disabling it.


> 
> So, here are *my questions*:
> - why am I able to successfully ssl-bump https://www.google.com but not
> https://gmail.com/
> - why does the Chrome freakout about gmail but not Firefox?

Many reasons. I point at HSTS - which is a collection of certificate
management methods and protocol bits they use to perform things like
cert pinning, side channel verification, etc.

At the base of it, TLS was designed from the ground up to be a security
protocol that prevents anybody from hijacking it (like SSL-bump does),
or at least shouts loudly to the endpoints if someone does. Only terribly
bad mis-use or security flaws in the protocol allow things like SSL-Bump
to work in the first place. You have all just been lucky so far that the
Trusted CA system is a big flaw and that mis-use of the protocol is rampant.

Google and friends are fighting to fix those flaws. Whenever they
succeed at closing one flaw, the hijacking using it "stops working".


> - Is there a way to fix it OR, at least, to bypass it? (I tried creating an
> ACL for this and allowing direct traffic but it didn't seem to work...)
> -- can we make the connection go direct when ssl certificate errors are
> detected?

Let's be clear; *you* are the brokenness here, *you* are the attacker.

 ... when SSL-Bump "dont work" it means the security *is* working.

What you are looking for is not a "fix". It is another security flaw so
you can break their now-improved encryption again.

That should tell you what the answer is.


> - and has anyone else seen this problem where the browser seems to use the
> original certificate, even though I'm redirecting traffic to Squid?
> 
> Not sure if this is relevant, but here are some ssl errors I caught on my
> cache.log file:
> root@server:/var/log/squid3# tail cache.log
> 2015/02/05 21:47:52 kid1| clientNegotiateSSL: Error negotiating SSL
> connection on FD 30: Closed by client
> 2015/02/05 21:48:36 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 96: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
> 

Re: [squid-users] R: Blocking hotshield vpn

2015-02-06 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
I'm not using linux. :)

Layer 7 filtering requires an application-level proxy or DPI. We are
talking about filtering, aren't we?

On Cisco this task requires a bit of investigation (sniffing and
tcpiputils.com) and simply adding some ACLs:

ip access-list extended TO_INET
 remark Network 100 is passed
 permit ip 192.168.100.0 0.0.0.255 any
 remark Hamachi
 deny   ip 25.0.0.0 0.255.255.255 any
 deny   ip 64.34.106.0 0.0.0.255 any
 deny   ip any host 69.25.21.195
 deny   ip any host 74.201.75.195
 deny   ip any host 146.255.195.92
 remark ZenMate servers
 deny   ip any 162.159.244.0 0.0.0.255
 deny   ip any 78.137.96.0 0.0.7.255
 deny   ip any 46.165.192.0 0.0.63.255
 deny   ip any 207.244.64.0 0.0.63.255
 deny   ip any 178.162.128.0 0.0.127.255
 deny   ip any 179.43.128.0 0.0.31.255
 deny   ip any 88.150.192.0 0.0.31.255
 deny   ip any 31.7.56.0 0.0.7.255
 deny   ip any 185.12.44.0 0.0.3.255
 deny   ip any 103.10.197.0 0.0.0.255
 deny   ip any 37.58.48.0 0.0.15.255
 deny   ip any 5.152.192.0 0.0.31.255
 deny   ip any 81.17.16.0 0.0.15.255
 deny   ip any 199.115.112.0 0.0.7.255
 deny   ip any 103.10.199.0 0.0.0.255
 remark Opera Turbo servers
 deny   ip any 37.228.104.0 0.0.7.255
 deny   ip any 141.0.8.0 0.0.7.255
 deny   ip any 82.145.208.0 0.0.15.255
 deny   ip any 195.189.142.0 0.0.1.255
 deny   ip any 185.26.180.0 0.0.3.255
 remark Ultrasurf port
 deny   tcp any any eq 9666
 remark Hola
 deny   ip any host 107.22.193.119
 deny   ip any host 54.225.121.9
 deny   ip any host 54.225.227.202
 deny   ip any host 54.243.128.120
 deny   tcp any any eq 6851
 deny   tcp any any eq 6861
 deny   ip any 107.155.75.0 0.0.0.255
 deny   ip any 103.18.42.0 0.0.0.255
 deny   ip any 103.27.232.0 0.0.0.255
 deny   ip any 103.4.16.0 0.0.0.255
 deny   ip any 103.6.87.0 0.0.0.255
 deny   ip any 104.131.128.0 0.0.15.255
 deny   ip any 106.185.0.0 0.0.127.255
 deny   ip any 106.186.64.0 0.0.63.255
 deny   ip any 106.187.0.0 0.0.63.255
 deny   ip any 107.155.85.0 0.0.0.255
 deny   ip any 107.161.144.0 0.0.7.255
 deny   ip any 107.170.0.0 0.0.127.255
 deny   ip any 107.181.166.0 0.0.0.255
 deny   ip any 107.190.128.0 0.0.15.255
 deny   ip any 107.191.100.0 0.0.3.255
 deny   ip any 108.61.208.0 0.0.1.255
 deny   ip any 109.74.192.0 0.0.15.255
 deny   ip any 128.199.128.0 0.0.63.255
 deny   ip any 14.136.236.0 0.0.0.255
 deny   ip any 149.154.157.0 0.0.0.255
 deny   ip any 149.62.168.0 0.0.3.255
 deny   ip any 151.236.18.0 0.0.0.255
 deny   ip any 158.255.208.0 0.0.0.255
 deny   ip any 162.213.197.0 0.0.0.255
 deny   ip any 162.217.132.0 0.0.3.255
 deny   ip any 162.218.92.0 0.0.1.255
 deny   ip any 162.221.180.0 0.0.1.255
 deny   ip any 162.243.0.0 0.0.127.255
 deny   ip any 167.88.112.0 0.0.3.255
 deny   ip any 168.235.64.0 0.0.3.255
 deny   ip any 173.255.192.0 0.0.15.255
 deny   ip any 176.58.96.0 0.0.31.255
 deny   ip any 176.9.0.0 0.0.255.255
 deny   ip any 177.67.81.0 0.0.0.255
 deny   ip any 178.209.32.0 0.0.31.255
 deny   ip any 178.79.128.0 0.0.63.255
 deny   ip any 192.110.160.0 0.0.0.255
 deny   ip any 192.121.112.0 0.0.0.255
 deny   ip any 192.184.80.0 0.0.7.255
 deny   ip any 192.211.49.0 0.0.0.255
 deny   ip any 192.241.160.0 0.0.31.255
 deny   ip any 192.30.32.0 0.0.3.255
 deny   ip any 192.34.56.0 0.0.7.255
 deny   ip any 192.40.56.0 0.0.0.255
 deny   ip any 192.73.232.0 0.0.7.255
 deny   ip any 192.81.208.0 0.0.7.255
 deny   ip any 192.99.0.0 0.0.255.255
 deny   ip any 198.147.20.0 0.0.0.255
 deny   ip any 198.211.96.0 0.0.15.255
 deny   ip any 198.58.96.0 0.0.31.255
 deny   ip any 199.241.28.0 0.0.3.255
 deny   ip any 208.68.36.0 0.0.3.255
 deny   ip any 209.222.30.0 0.0.0.255
 deny   ip any 213.229.64.0 0.0.63.255
 deny   ip any 217.170.192.0 0.0.15.255
 deny   ip any 217.78.0.0 0.0.15.255
 deny   ip any 23.227.160.0 0.0.0.255
 deny   ip any 23.249.168.0 0.0.1.255
 deny   ip any 23.29.124.0 0.0.0.255
 deny   ip any 31.193.128.0 0.0.15.255
 deny   ip any 31.220.24.0 0.0.3.255
 deny   ip any 37.139.0.0 0.0.31.255
 deny   ip any 37.235.52.0 0.0.0.255
 deny   ip any 41.215.240.0 0.0.0.255
 deny   ip any 41.223.52.0 0.0.0.255
 deny   ip any 46.17.56.0 0.0.7.255
 deny   ip any 46.19.136.0 0.0.7.255
 deny   ip any 46.246.0.0 0.0.127.255
 deny   ip any 46.38.48.0 0.0.7.255
 deny   ip any 46.4.0.0 0.0.255.255
 deny   ip any 5.9.0.0 0.0.255.255
 deny   ip any 50.116.32.0 0.0.15.255
 deny   ip any 66.85.128.0 0.0.63.255
 deny   ip any 74.82.192.0 0.0.31.255
 deny   ip any 77.237.248.0 0.0.1.255
 deny   ip any 81.4.108.0 0.0.3.255
 deny   ip any 85.234.128.0 0.0.31.255
 deny   ip any 88.150.156.0 0.0.3.255
 deny   ip any 91.186.0.0 0.0.31.255
 deny   ip any 92.222.0.0 0.0.255.255
 deny   ip any 92.48.64.0 0.0.63.255
 deny   ip any 94.76.192.0 0.0.63.255
 deny   ip any 95.215.44.0 0.0.3.255
 deny   ip any 96.126.96.0 0.0.7.255
 remark Browsec
 deny   ip any 178.62.64.0 0.0.63.255
 deny   ip any 188.226.128.0 0.0.127.255
 deny   ip any 128.199.192.0 0.0.63.255
 deny   ip any 104.131.0.0 0.

[squid-users] R: Blocking hotshield vpn

2015-02-06 Thread Job
Hello Yuri!

>>Only before Squid - using Cisco or something like.
>>Either Cisco acl's, or NBAR protocol discovery.

is there a way to implement a sort of layer-7 filter for hotshield vpn (or 
ultrasurf) that works on Linux?

Thank you again!
Francesco


Re: [squid-users] SSL-bump certificate issues (mostly on Chrome, when accessing Google websites)

2015-02-06 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
First: where can your cache find the OpenSSL public CA certs? To validate
the connection from cache to server, Squid must see the root authority CAs.

E.g. (from my configuration; note: all Google services are bumped and work
perfectly):

https_port 3129 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/rootCA.crt
key=/usr/local/squid/etc/rootCA.key capath=/etc/opt/csw/ssl/certs

Second: the OpenSSL CA bundle is not complete. You must add ALL
intermediate and absent root CAs and run c_rehash.
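A sketch of what "add ... and run c_rehash" amounts to (the certs directory is taken from the capath above; the source directory is illustrative):

```
# copy any missing intermediate/root CA certs into the capath directory
cp /tmp/extra-cacerts/*.pem /etc/opt/csw/ssl/certs/
# rebuild the hash symlinks OpenSSL uses to find CAs by subject name
c_rehash /etc/opt/csw/ssl/certs
```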

Third: where are

sslproxy_cert_error allow all

and

sslproxy_flags DONT_VERIFY_PEER

in your configuration? Yes, this is dangerous, but it permits suppressing
errors on some sites.

And finally - you can't bypass ssl-bump on 3.4.x using dstdomain ACLs.
Only IP-based dst ACLs are usable.

Regards,
Yuri.

On 06.02.2015 11:10, Luis Miguel Silva wrote:
> Dear all,
>
> I recently compiled squid-3.4.9 with ssl-bump support and, although it
is working for the most part, I'm having some issues accessing some
websites.
>
> The behavior is REALLY weird so I'm going to try and describe it the
best I can:
> - If i access https://www.google.com/ in Chrome, I could see that it
was processing my certificate MOST of the times...
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg
> - some other times, it seemed to bypass my proxy altogether and I
finally figured out it was because Chrome will try to access QUIC
enabled websites using that protocol, so it would bypass my firewall
redirect rules! I believe I now have solved this by blocking FORWARDING
traffic on port 443 udp...
> - the weird thing is that, if I then try and access https://gmail.com
, I get a certificate error:
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg#1
> - ...though, sometimes, I can access https://mail.gmail.com/ just fine
(without any certificate errors), but stop being able to as soon as I
try to access https://gmail.com/ and the browser complains about the
certificate.
> -- and, according to my tests, I can access it from firefox just fine
MOST of the times:
> *screenshot here*: http://imgur.com/JsNiqDL,Ned5zAU,nJjRPtg#2
> -- though I have also seen situations where Firefox also complains
about a certificate error when connecting to gmail.com 
> - and, although I cannot reproduce it 100% of the times, sometimes,
even though I have my iptables redirect rules ON, the browser still
seems to "connect direct" (or, at least, it shows it has the original
certificate)!
> -- like I said, at first, I was able to trace this back to QUIC in
Chrome but...I'm currently blocking traffic on port 443 udp so I don't
know what's happening here (does it use different ports?!)
> 
> So, here are *my questions*:
> - why am I able to successfully ssl-bump https://www.google.com
 but not https://gmail.com/
> - why does the Chrome freakout about gmail but not Firefox?
> - Is there a way to fix it OR, at least, to bypass it? (I tried
creating an ACL for this and allowing direct traffic but it didn't seem
to work...)
> -- can we make the connection go direct when ssl certificate errors
are detected?
> - and has anyone else seen this problem where the browser seems to use
the original certificate, even though I'm redirecting traffic to Squid?
>
> Not sure if this is relevant, but here are some ssl errors I caught on
my cache.log file:
> root@server:/var/log/squid3# tail cache.log
> 2015/02/05 21:47:52 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 30: Closed by client
> 2015/02/05 21:48:23 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 30: Closed by client
> 2015/02/05 21:48:36 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 96: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1
alert unknown ca (1/0)
> 2015/02/05 21:48:54 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 105: Closed by client
> 2015/02/05 21:49:15 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 79: Broken pipe (32)
> 2015/02/05 21:49:15 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 54: Broken pipe (32)
> 2015/02/05 21:49:24 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 79: Closed by client
> 2015/02/05 21:49:55 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 26: Closed by client
> 2015/02/05 21:50:26 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 45: Closed by client
> 2015/02/05 21:50:56 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 68: Closed by client
> root@server:/var/log/squid3#
>
> By the way, here's how I generated my certificate:
> openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout
myCA.pem -out myCA.pem
> openssl x509 -in myCA.pem -outform DER -out certificate.der
> (note: myCA.pem is the certificate that squid is using and
certificate.der is the one I