[squid-users] source address ip spoofing

2014-08-27 Thread Julian
Hello Squid Dev. Team and Users,

I need your advice on a Squid deployment scenario.

We have deployed on our network a physical machine running Squid 2.7,
listening on port 8080.
Proxy auto-discovery in our users' browsers is activated by a wpad.dat
file, which transparently redirects our users' HTTP requests to the
proxy server.
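
For reference, a wpad.dat file is just a JavaScript PAC script. A minimal
sketch of the kind of thing involved (the proxy hostname and port here are
illustrative placeholders, not our real ones):

```javascript
// Minimal PAC/wpad.dat sketch. "proxy.example.local:8080" is a
// placeholder; substitute your real proxy address and port.
function FindProxyForURL(url, host) {
    // Plain hostnames (no dots, i.e. intranet names) bypass the proxy
    if (host.indexOf(".") === -1) {
        return "DIRECT";
    }
    // Everything else goes through the proxy, falling back to direct
    return "PROXY proxy.example.local:8080; DIRECT";
}
```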

The way the proxy server works now, it hides the IP addresses of the
users directed to our machine.

The question is: can the proxy keep working in the same deployment
scenario, but perform source IP address spoofing, so that content
requests do not hide the users' IPs?

Thank you,
Julian



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/source-address-ip-spoofing-tp4667417.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: source address ip spoofing

2014-08-28 Thread Julian
Hi Amos,

Thank you for the direction provided.

If I understand correctly, it can be done with the right configuration,
one that deletes the source IP from the header.

Thanks again,
Julian



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/source-address-ip-spoofing-tp4667417p4667419.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: source address ip spoofing

2014-08-28 Thread Julian
Hi Amos,

We turned off the X-Forwarded-For/Via headers, and the effect was quite the
opposite of what we expected :(
Now when a host checks "What is my IP?" against any online website, it shows
only the IP address of the proxy server.

What we want is exactly the opposite.
When hosts check their IP, we want them to see their own IP, with no sign of
the proxy server's IP.
That way we don't have to log the hosts' access, because they will identify
themselves to any Internet server with their own IP. We don't want our users
hiding behind the proxy's IP.
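
For anyone following along, the header changes we applied were along these
lines (Squid 2.7 syntax; a sketch, not our full config):

```
# Stop advertising the client address and the proxy hop
forwarded_for off
via off
# Squid 2.6/2.7 header-control directive
# (renamed request_header_access in Squid 3.1)
header_access X-Forwarded-For deny all
```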

Would this be possible when using wpad redirection?
Is there a way to make Squid completely spoof the source IP and become
transparent for the users?

Thank you for your help,
Julian




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/source-address-ip-spoofing-tp4667417p4667436.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: source address ip spoofing

2014-08-28 Thread Julian
Hi Eliezer,

I understand what you are saying, but we use external IPs for our network
hosts (nothing in the 192.168.x.x range).
What I need is to direct the traffic to our proxy using the wpad mechanism
(which works just fine for us) but to make the proxy completely transparent
to external destinations.
I think a TPROXY-enabled Squid might be a way to do it, but we only run
Squid 2.7 now.

We want to keep running the proxy in the same deployment scenario, except
that external Internet destinations need to see the requests coming from our
hosts' IPs instead of the proxy's.
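
For the record, the TPROXY approach needs a newer Squid than 2.7 plus kernel
and iptables support, and it only applies to intercepted traffic, not to
browsers pointed at the proxy explicitly via wpad. A rough sketch of the
usual setup (Squid 3.1+; the port number and firewall mark are illustrative):

```
# squid.conf: an interception port that spoofs the client source address
http_port 3129 tproxy

# Host firewall/routing side (run as root; requires TPROXY kernel support):
# iptables -t mangle -A PREROUTING -p tcp --dport 80 \
#     -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# ip rule add fwmark 1 lookup 100
# ip route add local 0.0.0.0/0 dev lo table 100
```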

Thank you for your insights,
Julian




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/source-address-ip-spoofing-tp4667417p4667440.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: source address ip spoofing

2014-08-29 Thread Julian
Hi Amos,

You're right.
That was the reason for my question here.
I thought maybe there was a way to do it that I wasn't seeing.

The motivation behind the question is the illegal activity that some users
can carry out under the cover of the proxy. We wanted to avoid storing
access logs but keep the network design we use now.

Thank you for all the details and the help,
Julian




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/source-address-ip-spoofing-tp4667417p4667461.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] IMAP support

2008-11-12 Thread julian julian
Hello, how can I access IMAP servers (Gmail IMAP, for example) through Squid?
I just added the IMAP ports in squid.conf as Safe_ports and SSL_ports, but it
does not work.

Thanks




  


Re: [squid-users] IMAP support

2008-11-12 Thread julian julian
OK, I'm using Thunderbird and have set the proxy manually, but when I try to
connect I get an error. Should I make some special config in Squid?


--- On Wed, 11/12/08, Amos Jeffries [EMAIL PROTECTED] wrote:

 From: Amos Jeffries [EMAIL PROTECTED]
 Subject: Re: [squid-users] IMAP support
 To: [EMAIL PROTECTED]
 Cc: squid squid-users@squid-cache.org
 Date: Wednesday, November 12, 2008, 6:02 AM
 julian julian wrote:
  Hello, how can I access to IMAP (gmail IMAP for
 example)servers trough squid, I just add imap ports in
 squid.conf as Safe_port and SSL_port, but it does not work. 
 
 IMAP protocol is not HTTP protocol. Squid cannot handle
 IMAP requests.
 
 The only way to do this is to configure all the mail clients
 to use an HTTP proxy and wrap the requests in HTTP first.
 
 Amos
 -- Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2


  


Re: [squid-users] IMAP support

2008-11-12 Thread julian julian
Because all my traffic to the Internet is managed by Squid. Do you have any
suggestions?


--- On Wed, 11/12/08, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:



 From: Matus UHLAR - fantomas [EMAIL PROTECTED]
 Subject: Re: [squid-users] IMAP support
 To: squid-users@squid-cache.org
 Date: Wednesday, November 12, 2008, 6:13 AM
 On 12.11.08 05:57, julian julian wrote:
  Hello, how can I access to IMAP (gmail IMAP for
 example)servers trough
  squid, I just add imap ports in squid.conf as
 Safe_port and SSL_port, but
  it does not work.
 
 why would you want to access IMAP through squid ?
 
 -- 
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ;
 http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this
 address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek
 reklamnu postu.
 Boost your system's speed by 500% - DEL
 C:\WINDOWS\*.*


  




Re: [squid-users] Transparent LDAP authentication

2008-03-31 Thread julian julian
I use this config and it works OK in production.
Scenario:
AD: Windows 2003 R2
CentOS: 4.4 and 5.1
SMB and winbind: 3.0.10 and 3.0.25b
Squid: 2.5.STABLE14 and 2.6.STABLE6
Using NTLM authentication


# Define authentication parameters
#auth_param digest nonce_max_count 50
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm max_challenge_reuses 2
auth_param ntlm max_challenge_lifetime 2 minutes
auth_param ntlm use_ntlm_negotiate off
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off


# ACCESS CONTROLS

# Define an external ACL type for Windows group lookups, pointing at the
# wbinfo_group.pl helper shipped with the Samba suite
external_acl_type nt_group ttl=2 children=10 %LOGIN /usr/lib/squid/wbinfo_group.pl

# Define the group ACLs
acl Sistemas external nt_group sistemas_    # ACL for the Sistemas group
acl InetAccessControl external nt_group internet_control    # ACL for the internet_control group
acl InetAccessFull external nt_group internet_full    # ACL for the internet_full group
acl Autenticados proxy_auth REQUIRED    # forces the authentication prompt

# Grant access using the ACLs defined above
# (the url* ACLs, novalida and auditoria are defined elsewhere in my config)
http_access deny urlDenegadas !Sistemas
http_access allow novalida
http_access allow urlAuditoria auditoria
http_access allow Sistemas
http_access allow urlPermitidas InetAccessControl
http_access allow InetAccessFull
http_access deny !Autenticados
http_access deny all









--- Philip Kloppers [EMAIL PROTECTED]
wrote:

 I have an OpenSuse 10.2 box that runs Samba /
 OpenLDAP as a PDC, as well as
 Squid with delay pools to limit bandwidth dependant
 upon user, group, time
 of day and machine. I have managed to get everything
 working and
 authenticating correctly using smb_ldap_auth and
 smb_ldap_group. However, I
 would like to get the clients to authenticate
 transparently using the domain
 credentials from the initial domain logon, and not
 having to re-authenticate
 every time they open the browser.
 
 The clients (mostly XP with a few FreeNX terminals
 on various Linux
 flavours) are all set up to use the proxy, and then
 iptables rules blocking
 users from bypassing the proxy, so I am not
 transparently intercepting web
 traffic, as I understand that authentication cannot
 be used with a
 transparent proxy.
 
 Is single sign-on a possibility without using an M$
 PDC? All the searching
 seems to point to using ntlm_auth for this sort of
 thing.
 
 Philip
 
 PS: I have tried using ntlm_auth to authenticate
 against the Samba server...
 the users are able to authenticate correctly, but
 still need to re-enter
 their credentials every time they open their
 browsers.
 
 



  



Re: [squid-users] Slow internet

2008-03-31 Thread julian julian
I have never really used Squid on the Windows platform, but I
think it could run as well as in a Linux environment.
What user validation method have you implemented? A
misconfigured squid.conf could be a cause. Check your
cache.log for validation errors.





--- Jeremy Kim [EMAIL PROTECTED] wrote:

 Hello,
 
 Using the squid proxy is really slow. Is there
 anyway to make it faster?
 
 
 I have squid version Squid2.6STABLE18 on a XP.
 
 
 The machine I am running this on is a Dell PowerEdge
 SC 1420 Xeon CPU
 2.80GHz and 4GB if ram. I am using three scsi
 drives.  One to host the
 squid and operating system (est.75G Drive) and the
 other two drives
 (est. each 210G drives) as cache drives.
 
 
 Here are my configurations. Everything else is
 pretty much set on
 default.
 
 #Default:
 # http_access deny all
 #Recommended minimum configuration:
 # Only allow cachemgr access from localhost
 http_access allow biblical
 http_access allow manager localhost
 http_access deny manager
 # Deny requests to unknown ports
 http_access deny !Safe_ports
 # Deny CONNECT to other than SSL ports
 http_access deny CONNECT !SSL_ports
 
 
 # And finally deny all other access to this proxy
 http_access deny all
 
 # MEMORY CACHE OPTIONS
 #Default:
 cache_mem 32 MB
 
 #Default:
 maximum_object_size_in_memory 8 KB
 
 #Default:
 memory_replacement_policy lru
 
 #Default:
 cache_replacement_policy heap GDSF
 
 #Default:
 cache_dir aufs e:/cache 512 16 256
 cache_dir aufs f:/cache 512 16 256
 
 #Default:
 store_dir_select_algorithm least-load
 
 #Default:
 # max_open_disk_fds 0
 
 #Default:
 # minimum_object_size 0 KB
   
 #Default:
 # maximum_object_size 4096 KB
 
 #Default:
 cache_swap_low 90
 cache_swap_high 95
 
 #Default:
 # maximum_object_size 4096 KB
 
 #Default:
 cache_swap_low 90
 cache_swap_high 95
 
 
 



  



Re: [squid-users] Slow internet

2008-04-03 Thread julian julian
I use CentOS 4.4 and 5.1, but I recommend CentOS 4.4
to you because it is a bit easier to configure, due to
the Squid and Samba versions included in this distro.
Be careful, if you are using an Intel motherboard, about
how Linux sees the hard disks. If they are SATA you
should see them as sdX (X = a, b or c) instead of hdX; it
can affect read/write performance.


--- Jeremy Kim [EMAIL PROTECTED] wrote:

 I actually started with squid on a suse 10.2 but it
 had problems of its
 own.  It would be working fine but once in a while
 it would have these
 long delays.  I think it was when cache was being
 swapped. 
 
 Which linux version do you use for your linux?
 
 Also I am not running any user authentication right
 now on my XP squid.
 
 
 On Mon, 2008-03-31 at 10:56 -0700, julian julian
 wrote:
  I really never used squid on Windows plataform,
 but i
  think it culd be run as good as in linux
 enviromet.
  What user validation method are you implemented? a
  missconfig squid.conf could be a cause. Check your
  cache.log for validation erros.
  
  
  
  
  
  --- Jeremy Kim [EMAIL PROTECTED] wrote:
  
   Hello,
   
   Using the squid proxy is really slow. Is there
   anyway to make it faster?
   
   
   I have squid version Squid2.6STABLE18 on a XP.
   
   
   The machine I am running this on is a Dell
 PowerEdge
   SC 1420 Xeon CPU
   2.80GHz and 4GB if ram. I am using three scsi
   drives.  One to host the
   squid and operating system (est.75G Drive) and
 the
   other two drives
   (est. each 210G drives) as cache drives.
   
   
   Here are my configurations. Everything else is
   pretty much set on
   default.
   
   #Default:
   # http_access deny all
   #Recommended minimum configuration:
   # Only allow cachemgr access from localhost
   http_access allow biblical
   http_access allow manager localhost
   http_access deny manager
   # Deny requests to unknown ports
   http_access deny !Safe_ports
   # Deny CONNECT to other than SSL ports
   http_access deny CONNECT !SSL_ports
   
   
   # And finally deny all other access to this
 proxy
   http_access deny all
   
   # MEMORY CACHE OPTIONS
   #Default:
   cache_mem 32 MB
   
   #Default:
   maximum_object_size_in_memory 8 KB
   
   #Default:
   memory_replacement_policy lru
   
   #Default:
   cache_replacement_policy heap GDSF
   
   #Default:
   cache_dir aufs e:/cache 512 16 256
   cache_dir aufs f:/cache 512 16 256
   
   #Default:
   store_dir_select_algorithm least-load
   
   #Default:
   # max_open_disk_fds 0
   
   #Default:
   # minimum_object_size 0 KB
 
   #Default:
   # maximum_object_size 4096 KB
   
   #Default:
   cache_swap_low 90
   cache_swap_high 95
   
   #Default:
   # maximum_object_size 4096 KB
   
   #Default:
   cache_swap_low 90
   cache_swap_high 95
   
   
   
  
  
  
   


 



  



Re: [squid-users] DNS lookup errors when using squid

2008-04-03 Thread julian julian
I have never used Squid on Windows, but I've used it a lot on
Linux. You can set the DNS server in two ways: a)
leave the DNS settings commented out in squid.conf,
and Squid will use the operating system's DNS settings; b)
set the DNS configuration in the squid.conf file. The
second one works well on Linux.
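
For option (b), the relevant directive is dns_nameservers; a sketch (the
addresses are placeholders for your own resolvers):

```
# squid.conf: override the OS resolver list with explicit DNS servers
dns_nameservers 192.0.2.53 192.0.2.54
```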

Julián


--- Oliver Marshall [EMAIL PROTECTED]
wrote:

 We run a very basic squid setup on a windows 2003
 box here. It's purely
 used to route all web traffic over a cheap adsl line
 leaving the leased
 line free for important stuff. The 'squid server'
 (win2k3) has the
 gateway IP set to that of the adsl router meaning
 that when someone sets
 the proxy in their browser all the web traffic will
 go over the adsl
 line. If for some reason the adsl goes down, we just
 change the gateway
 address to the leased line router and the web comes
 back. It's a basic
 but workable solution.
 
 However.
 
 We have been having a lot of problems with people
 getting DNS lookup
 failures when using the squid proxy. If we remove
 the proxy setting, the
 web works for the users, put them on to squid again
 and with an hour or
 so DNS failures appear. Hitting F5 a few times
 solves it and they carry
 on until the next time, but it only happens with
 Squid. I've tried
 swapping from the ADSL to the SDSL but it still
 happens if squid is
 used. I've tried another Windows proxy, and the
 problem doesn't happen,
 nor does it happen without Squid involved at all.
 
 Any ideas why this might be? I know that squid
 doesn't have a DNS
 element as such, using the DNS settings and servers
 specified on the
 underlying OS, but the fact that it only happens
 with Squid involved
 does make me think something is wrong somewhere.
 
 Olly
 



  



Re: [squid-users] FTP access via proxy server

2008-04-03 Thread julian julian
Squid has some limitations in managing FTP traffic. If
you want to use a web browser for non-anonymous FTP
access, you will run into many problems. Instead, I
recommend using a real FTP client (CuteFTP, FileZilla,
etc.) combined with an FTP proxy such as frox
(http://frox.sourceforge.net/); it works very well. I
use this setup on several production servers.


--- R Pradeepa [EMAIL PROTECTED] wrote:

 
 when i try to access an ftp site i get the following
 error
 
 The folder is readonly because the proxy server is
 not set up to allow 
 full access. To move, paste, rename or delete files
 you should use a 
 different proxy server.
 
 Without proxy server i am able to view the ftp sites
 or add any files to 
 that ftp site in IE 6 without any issues.
 
 Please suggest me what ashould be enabled in squid
 proxy to allow write
 permission to that ftp site
 
 
 Thanks  Regards
 Pradeepa
 Sys.Administrator
 
 
 
 



  



Re: [squid-users] Dub with access.log rotation ...

2008-04-04 Thread julian julian
Squid, like other Linux software, relies on a daemon known as
logrotate, which is responsible for many log rotations.
You should modify logrotate.conf or
./logrotate.d/squid to specify the logrotate config for
Squid.
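
A sketch of what such a logrotate stanza might look like (the paths follow a
typical Debian layout and are assumptions; with this in place you would also
drop any separate squid -k rotate cron entry, and set logfile_rotate 0 in
squid.conf so Squid itself keeps no extra copies):

```
# e.g. /etc/logrotate.d/squid (typical Debian path; adjust to your install)
/var/log/squid/access.log /var/log/squid/cache.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Ask Squid to reopen its log files after rotation
        /usr/sbin/squid -k rotate
    endscript
}
```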

--- Ramiro Sabastta [EMAIL PROTECTED] wrote:

 Hi,
 
 I installed squid on a Debian box.
 
 Everithing is working well, but I have a issue that
 I can't solve.
 
 The access.log log file always close an open new one
 at 6 A.M.
 
 I try to change that with the squid -k rotate
 option, including in the
 crontab file this line:
 
 0 0 * * * /usr/sbin/squid -k rotate
 
 but this configuration close the file at 00 AM and 6
 AM too.
 
 ¿What can i do to force the rotation only to 0 AM?
 (but not to 6 AM)
 
 Thanks a lot ...
 
 Kind regards !!
 
 Ramiro
 



  



Re: [squid-users] ICAP: fake user and new icap header X-Authenticated-Groups

2008-04-04 Thread julian julian
I have experience with Windows 2003 ADS (also Windows NT
domains) and Squid 2.5/2.6. I read Windows groups and
manage them with several ACLs. It works without problems.

Environment:
OS: CentOS 4.4/5.1
Samba: 3.0.x
Squid: 2.5/2.6

By the way, I've been suffering BlueCoat for many years and
I hate it.

--- Arno _ [EMAIL PROTECTED] wrote:

 
 Hello,
 In my configuration I have 2 BlueCoat proxies talking to a webwasher via
 ICAP.
 And I also have a Squid 3.0 for my tests and as a backup of the BlueCoat.
 But my Squid is not doing any authentication; I can't and don't want to.
 
 So to be able to make it work with the ICAP (webwasher) I need to send
 username and user group to it.
 
 Is there any actual way of sending fake information, or should I create new
 icap-fake-client-username and icap-fake-client-group options in the ICAP
 config part of squid.conf?
 Anyone interested, or will it be just for me?
 
 That will let me (you, anyone) have a mixed environment with authenticated
 proxies and some others not (can be for automated systems or whatever you
 want)
 
 regards,
 
 arno
 




  



Re: [squid-users] stop anonymous browsing

2008-04-10 Thread julian julian
You should choose one of the several validation options
offered by Squid (LDAP, ADS, Samba, NTLM, local users,
etc.).
If you have many proxies, I suggest trying to integrate
them with your existing user validation repository.
--- Anil Saini [EMAIL PROTECTED] wrote:

 
 
  How can I stop anonymous browsing?
  
  We have a huge collection of web proxies used to bypass
  the ACL blocked list.
  Is there any solution to block them all without making a list
  of them?
 
 -- 
 View this message in context:

http://www.nabble.com/stop-anonymous-browsing-tp16603009p16603009.html
 Sent from the Squid - Users mailing list archive at
 Nabble.com.
 
 




RE: [squid-users] client ip's

2008-04-10 Thread julian julian
Jorge: have you set up the network properly? Are you
using a 192.168.x.x net? The network parameters must be
written in the
../ifcfg-eth0 and ../ifcfg-eth1 files (because I
suspect that you have two NICs). The route command
shows some aspects of your network configuration.
Julián

--- Jorge Bastos [EMAIL PROTECTED] wrote:

 Hum I got some news on this,
 
 I don't know why my system started to give me this
 information:
 
 Kernel IP routing table
 Destination Gateway Genmask
 Flags Metric RefUse Iface
 192.168.1.0 *   255.255.255.0   U   
  0  00 eth0
 192.168.0.0 *   255.255.255.0   U   
  0  00 eth1
 default localhost   0.0.0.0 UG  
  0  00 eth1
 
 Kernel IP routing table
 Destination Gateway Genmask
 Flags Metric RefUse Iface
 192.168.1.0 0.0.0.0 255.255.255.0   U   
  0  00 eth0
 192.168.0.0 0.0.0.0 255.255.255.0   U   
  0  00 eth1
 0.0.0.0 192.168.0.254   0.0.0.0 UG  
  0  00 eth1
 
 
 The fact is that the hosts file is correct:
 
 cisne:~# cat /etc/hosts
 127.0.0.1   localhost
 
 I only have this there
 
 I know this is not squid related but if you guys can
 give me a hand.
 I have no idea why is it resolving 192.168.0.254 to
 localhost.
 
 
 
 
 
  -Original Message-
  From: Jorge Bastos [mailto:[EMAIL PROTECTED]
  Sent: Saturday, 5 April 2008 21:23
  To: 'Henrik Nordstrom'
  Cc: 'Amos Jeffries'; squid-users@squid-cache.org
  Subject: RE: [squid-users] client ip's
  
  This already worked with some of the 3.0 versions.
  Gonna try to play with my iptables rules and let
 you guys know.
  
  
  
  
   -Original Message-
   From: Henrik Nordstrom
 [mailto:[EMAIL PROTECTED]
   Sent: Saturday, 5 April 2008 19:38
   To: Jorge Bastos
   Cc: 'Amos Jeffries'; squid-users@squid-cache.org
   Subject: RE: [squid-users] client ip's
  
   On Sat 2008-04-05 at 14:24 +0100, Jorge Bastos wrote:
  
I updated to last STABLE-4 on debian, but this
 still happens this
   way.
What can I do more?
  
   Good question.
  
   One thing you can try is to downgrade to
 Squid-2.6. If that shows the
   same symptoms the problem is not within Squid
 but most likely in your
   firewall ruleset or something else relevant to
 how the connections
  end
   up at your Squid.
  
   Regards
   Henrik
  
 
 
 




RE: [squid-users] client ip's

2008-04-10 Thread julian julian
The 192.168.0.254 reference that you are seeing when you run the
route command is the default gateway. Your server is not
resolving 192.168.0.254 to localhost.

 
--- Jorge Bastos [EMAIL PROTECTED] wrote:

 In fact I have 3 NIC's.
 
 Yes, the two interfaces I showed in the route print,
 are defined in
 /etc/network/interfaces.
 
 
 
 
  -Original Message-
  From: julian julian [mailto:[EMAIL PROTECTED]
  Sent: Thursday, 10 April 2008 15:47
  To: Jorge Bastos
  Cc: squid
  Subject: RE: [squid-users] client ip's
  
  Jorge: have you set the network properly? Are you
  using 192.168.x.x net. The network parameter must
 be
  wrote in
  ../ifcfg-eth0 and ../ifcfg-eth1 file (because I
  suspect that you have two nics). The route command
  shows some aspect of your network configuration.
  
  Julián
  
  --- Jorge Bastos [EMAIL PROTECTED] wrote:
  
   Hum I got some news on this,
  
   I don't know why my system started to give me
 this
   information:
  
   Kernel IP routing table
   Destination Gateway Genmask
   Flags Metric RefUse Iface
   192.168.1.0 *   255.255.255.0  
 U
0  00 eth0
   192.168.0.0 *   255.255.255.0  
 U
0  00 eth1
   default localhost   0.0.0.0
 UG
0  00 eth1
  
   Kernel IP routing table
   Destination Gateway Genmask
   Flags Metric RefUse Iface
   192.168.1.0 0.0.0.0 255.255.255.0  
 U
0  00 eth0
   192.168.0.0 0.0.0.0 255.255.255.0  
 U
0  00 eth1
   0.0.0.0 192.168.0.254   0.0.0.0
 UG
0  00 eth1
  
  
   The fact is that the hosts file is correct:
  
   cisne:~# cat /etc/hosts
   127.0.0.1   localhost
  
   I only have this there
  
   I know this is not squid related but if you guys
 can
   give me a hand.
   I have no idea why is it resolving 192.168.0.254
 to
   localhost.
  
  
  
  
  
-Original Message-
From: Jorge Bastos
 [mailto:[EMAIL PROTECTED]
    Sent: Saturday, 5 April 2008 21:23
To: 'Henrik Nordstrom'
Cc: 'Amos Jeffries';
 squid-users@squid-cache.org
Subject: RE: [squid-users] client ip's
   
This already worked with some of the 3.0
 versions.
Gonna try to play with my iptables rules and
 let
   you guys know.
   
   
   
   
 -Original Message-
 From: Henrik Nordstrom
   [mailto:[EMAIL PROTECTED]
  Sent: Saturday, 5 April 2008 19:38
 To: Jorge Bastos
 Cc: 'Amos Jeffries';
 squid-users@squid-cache.org
 Subject: RE: [squid-users] client ip's

  On Sat 2008-04-05 at 14:24 +0100, Jorge
    Bastos wrote:

  I updated to last STABLE-4 on debian, but
 this
   still happens this
 way.
  What can I do more?

 Good question.

 One thing you can try is to downgrade to
   Squid-2.6. If that shows the
 same symptoms the problem is not within
 Squid
   but most likely in your
 firewall ruleset or something else relevant
 to
   how the connections
end
 up at your Squid.

 Regards
 Henrik
   
  
  
  
  
  
 
 




Re: [squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread julian julian

I suggest using a log analyzer like Webalizer or SARG; they are a bit more
complete for user behavior analysis.

Julián

--- On Wed, 6/11/08, Steven Engebretson [EMAIL PROTECTED] wrote:

 From: Steven Engebretson [EMAIL PROTECTED]
 Subject: [squid-users] Searching squid logs for pornographic sites
 To: squid-users@squid-cache.org
 Date: Wednesday, June 11, 2008, 11:32 AM
 I am looking for a tool that will scan the access.log file
 for pornographic sites, and will report the specifics back.
  We do not block access to any Internet sites, but need to
 monitor for objectionable content.
 
 What I am doing now is just greping for some key words, and
 dumping the output into a file.  I am manually going through
 about 60,000 lines of log file, following my grep.  99% of
 these are false.  Any help would be appreciated.
 
 Thank you all.
 
 
 -Steven E.





Re: [squid-users] Squid with two networks ...

2008-06-23 Thread julian julian
You could probably use a set of static routes created with the route command,
where you specify a static gateway for each network, defining as the gateway
each of your public IPs.
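
An alternative worth looking at inside squid.conf itself is
tcp_outgoing_address, which picks the source address (and hence the outgoing
interface) per client ACL. A sketch with hypothetical addresses:

```
# Pick the outgoing source address based on which network the client is in.
# The two client networks and the two local addresses are placeholders.
acl netA src 192.0.2.0/24
acl netB src 198.51.100.0/24
tcp_outgoing_address 192.0.2.1 netA
tcp_outgoing_address 198.51.100.1 netB
```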






--- On Mon, 6/23/08, Ramiro Sabastta [EMAIL PROTECTED] wrote:

 From: Ramiro Sabastta [EMAIL PROTECTED]
 Subject: [squid-users] Squid with two networks ...
 To: squid-users@squid-cache.org
 Date: Monday, June 23, 2008, 8:20 AM
 Hi !!!
 
 I've installed a Squid box transparent mode (3STABLE7)
 with two
 network cards and I must to implement this scenario:
 
 - The network cards are connected at two diferents internal
 Class C
 network with public IP.
 - If the http requeriment asking about an object that is in
 the cache,
 the Squid give the object throught the same interface that
 the
 original requermients comes (I tink that this is not a
 problem,
 because de origin IP is in the same network that the squid
 have).
 - If the http requeriment asking about an object that
 isn't in the
 cache, the Squid go direct to public network trought the
 same
 interface that the original requermients comes (This is the
 problem).
 
 Are some configurations of squid.conf that allow me to do
 that?
 
 I think resolve te problem, externally form Squid (with
 iptables, for example)
 
 Thanks a lot !!!
 
 Regards !!
 
 Ramiro


  


[squid-users] How to Block MSN over HTTP

2007-04-13 Thread julian julian
I want to deny access to IM services offered through web pages like
www.meebo.com. I can block these domains, but I'm
looking for a somewhat more general solution. Any
suggestion? Thanks





Re: [squid-users] failed to start or stop squid

2007-11-07 Thread julian julian
After checking the cache.log and your squid.conf, if they
look OK, delete these two files:
/var/spool/squid/swap.state and swap.??clean (I don't
remember the exact name of the second file, but it starts
with swap)

Regards,

Julián



--- Angela Williams [EMAIL PROTECTED] wrote:

 Hi!
 On Wednesday 07 November 2007, Chezhira Alice wrote:
  my squid service is failing to  start or stop i've
 tried to rebuild the
  squid cache but still its failing
 
 A look at your cache.log
 Post the relevant bits!
 Without your input we are dead in the water!
 
 Bit phuny though that it both fails to start and
 stop! Could it be a zombie?
 
 
 
 Cheers
 
 Ang
 
 
 -- 
 Angela Williams   Enterprise Outsourcing
 Unix/Linux  Cisco spoken here!   Bedfordview
 [EMAIL PROTECTED] Gauteng South Africa
 
 Smile!! Jesus Loves You!!
 




[squid-users] cuteftp throught squid

2007-11-21 Thread julian julian
Hello, I need to connect the CuteFTP client through Squid;
any suggestions?

thanks


  



[squid-users] Squid and HTTP Host value

2008-07-03 Thread Julian Gilbert
I am trying to configure Squid 2.5 and am looking for some assistance. When I 
make a client request to Squid in the form:


GET http://66.102.9.147/
HOST www.google.co.uk

the squid proxy makes the following request to the web server:

GET /
HOST 66.102.9.147

How do I configure Squid not to overwrite the Host value? The request from 
Squid should be sent as:


GET /
HOST www.google.co.uk

Many Thanks,

Julian Gilbert 





Re: [squid-users] Squid and HTTP Host value

2008-07-04 Thread Julian Gilbert

Thanks for your responses.

What security problem does rewriting the Host value prevent? I'm not sure 
what domain hijacking is. At work I currently use ISA Server 2004, and when 
it receives:


GET http://66.102.9.147/
HOST www.google.co.uk

it connects to 66.102.9.147 and sends:

GET /
HOST www.google.co.uk

Is this a security risk? The RFCs state that a web server MUST use 
http://66.102.9.147/ and ignore www.google.co.uk but as far as I can see a 
proxy is not required to ignore www.google.co.uk.


Regards,
Julian

- Original Message - 
From: Amos Jeffries [EMAIL PROTECTED]

To: Julian Gilbert [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Sent: Friday, July 04, 2008 12:47 AM
Subject: Re: [squid-users] Squid and HTTP Host value



Julian Gilbert wrote:

I am trying to configure squid 2.5 and looking for some assistance.


The first assistance we can give is upgrade to 3.0 or 2.7.
2.5 is well and truly obsolete now.


When I make client request to squid in the form:

GET http://66.102.9.147/
HOST www.google.co.uk

the squid proxy makes the following request to the web server:

GET /
HOST 66.102.9.147

How do I configure Squid not to overwrite the Host value? The request from 
Squid should be sent as:


GET /
HOST www.google.co.uk


The client asked for http://66.102.9.147/, nothing to do with google as 
far as HTTP is concerned. It's a security feature to prevent domain 
hijacking.


Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7






[squid-users] Squid for windows authentication against Active Directory

2011-05-25 Thread Julian Zoellner
Hello all,

in the last few days I tried to set up Squid for Windows 2.7.STABLE7 with 
authentication against an Active Directory group "Internet". For this I used 
the following HowTo:
http://www.papercut.com/kb/Main/InstallingAndConfiguringSquidNTProxy

So my squid.conf looks like this:

http_port 3128
external_acl_type win_domain_group ttl=120 %LOGIN 
c:/squid/libexec/mswin_check_ad_group.exe -d -G
acl Inet external win_domain_group MY-DOMAIN/Groups/Internet
 
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localnet src 10.0.0.0/13
acl SSL_ports port 443 563 1
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl CONNECT method CONNECT

http_access allow manager localnet
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow Inet

http_access deny all
never_direct allow all
icp_access allow all


After starting up the Squid service I get the following reply from my helper:
/mswin_check_ad_group.exe[3692]: Member of Domain MY-DOMAIN
/mswin_check_ad_group.exe[3692]: Into forest MY.DOMAIN
/mswin_check_ad_group.exe[3692]: External ACL win32 group helper build Mar 13 
2010, 14:16:45 starting up...
/mswin_check_ad_group.exe[3692]: Domain Global group mode enabled using 
'MY-DOMAIN' as default domain.

The last entry in my cache.log is:
2011/05/25 08:03:13| storeLateRelease: released 0 objects

When I try to connect I always get the "Cache Access Denied" error page.

Can someone please help me set this up?
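One hedged guess: `%LOGIN` in `external_acl_type` requires a proxy-authentication scheme so Squid can obtain a username, and the squid.conf above contains no `auth_param` lines at all, which would explain the "Cache Access Denied" page. A minimal sketch; the NTLM helper binary name is an assumption based on the Windows helper set that ships `mswin_check_ad_group.exe`, so check your `c:/squid/libexec/` directory:

```
# Hypothetical helper path -- verify the binary name in c:/squid/libexec/
auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 5
auth_param ntlm keep_alive on

# Require credentials first, then check AD group membership
acl AuthUsers proxy_auth REQUIRED
http_access allow AuthUsers Inet
```

With a scheme configured, the `%LOGIN` lookup in the external acl receives the authenticated username instead of failing outright.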

best regards 
Julian
-- 
Recommend GMX DSL to your friends and acquaintances and we will
reward you with up to 50 Euro! https://freundschaftswerbung.gmx.de


[squid-users] Squid for windows checking for PC in Active Directory

2011-05-31 Thread Julian Zoellner
Hello,

Squid for Windows 2.7.STABLE8 is running and user authentication is working 
as well. But some computers should not be allowed Internet access even with 
authenticated users logged in.

Is it possible to test whether a computer (with a random IP address) is a 
member of an Active Directory group?

Thanks for the help
Julian


[squid-users] Using of Squid 3.0 in production environments

2005-04-05 Thread Pawlowski Julian
Hi everybody!

I'd like to know if there are some of you already working with Squid 3.0 in 
their production environments.
I just tried to find out if there are any recommendations but I didn't find 
anything...

I read the Squid 3.0 release notes of April 20th 2003 and I think there are 
some interesting changes in it compared to 2.5.

Although the first pre-release of Squid 3 was made many years ago, there 
still isn't a first stable release, and I wondered why. Is the testing 
phase so demanding, or what exactly is the cause of this?

Thank you.


Best regards from Munich

Julian Pawlowski

-- 
Virtual-Planet Group GmbH
 
Julian Pawlowski
Systems Engineer, Broadcast & IT
Domagkstraße 34 · 80807 München
Tel. +49 89 41200 -7245 · Fax +49 89 41200 -7120

mailto:[EMAIL PROTECTED]
http://www.vp-tecmedia.de



RE: [squid-users] Using of Squid 3.0 in production environments

2005-04-05 Thread Pawlowski Julian
Hello Marc,

Thank you for your answer.

   http://www.squid-cache.org/mail-archive/squid-users/200504/0087.html

I already know that you normally do not take pre-releases into production
environments.

Your answer doesn't really address my question, which contained the
indirect question of when there may be a first stable release of Squid 3.0
available. I'm just asking because this really is the first time I have seen
pre-releases in an open-source project and yet, after years, still no
stable release. I think the way from pre to stable normally takes only a
few months :-)
If not, the pre-releases aren't really pre-releases but more development
snapshots.


Regards
Julian Pawlowski


RE: [squid-users] Which method for sibling peers?

2005-04-05 Thread Pawlowski Julian
Askar wrote:
 http://www.mail-archive.com/squid-users@squid-cache.org/msg27484.html

Ah, okay! That example is nearly the same as mine. I would use HTCP if
someone would say it is good enough to be used with Squid. But I think I
will take the classic ICP to be sure everything works.

Maybe some day, when Squid 3 stable has been released, I'll think about a
change... :-)


Regards
Julian


RE: [squid-users] Which method for sibling peers?

2005-04-07 Thread Pawlowski Julian
Askar wrote:
 cache_peer some_cache_net   sibling 3128 3130 proxy-only

Thanks, I already use this kind of config :-)
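For reference, the sibling relationship discussed in this thread can be sketched like this (the hostname is a placeholder, and the HTCP variant assumes the peer has `htcp_port` enabled):

```
# ICP sibling: peer's HTTP port 3128, ICP port 3130
cache_peer peer1.example.com sibling 3128 3130 proxy-only

# HTCP alternative: same idea, using HTCP's registered port 4827
#cache_peer peer1.example.com sibling 3128 4827 proxy-only htcp
```

`proxy-only` keeps objects fetched from the sibling out of the local cache, avoiding duplicate storage across the pair.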


[squid-users] Bypassing proxy authentication

2010-03-10 Thread Julian Pilfold-Bagwell

Hi all,

I have a Squid proxy server (v2.6.STABLE21-3.el5) running on CentOS 
5.4.  It's set up for NTLM authentication for use with Windows XP and it 
works perfectly.  However, I have a piece of software that needs to 
contact a stats site. I've tried running proxycfg -p 172.20.0.5:8002 
and have also added an acl called "stats" and set an http_access allow 
rule for the acl, but it still doesn't contact the site.  Other sites 
that we've had problems with have identical ACLs and rules and these 
work, so I know the syntax is correct.


The software manufacturers only know Windows proxy servers and don't 
seem to be able to help much.  Is there any way I can bypass the 
authentication, or is this implied by the access rule?


Thanks,

Julian PB


[squid-users] [SOLVED] [squid-users] Bypassing proxy authentication

2010-03-10 Thread Julian Pilfold-Bagwell

Julian Pilfold-Bagwell wrote:

Amos Jeffries wrote:

Julian Pilfold-Bagwell wrote:

Hi all,

I have a Squid proxy server (v2.6.STABLE21-3.el5) running on CentOS 
5.4.  It's set up for NTLM authentication for use with Windows XP 
and it works perfectly.  However, I have a piece of software that 
needs to contact a stats site. I've tried running proxycfg -p 
172.20.0.5:8002 and have also added an acl called "stats" and set an 
http_access allow rule for the acl, but it still doesn't contact the 
site.  Other sites that we've had problems with have identical ACLs 
and rules and these work, so I know the syntax is correct.


The software manufacturers only know Windows proxy servers and don't 
seem to be able to help much.  Is there any way I can bypass the 
authentication, or is this implied by the access rule?




Sounds like you are almost there.

Any http_access lines above the first line which tests for auth will 
permit/deny access without needing auth themselves.


Amos
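Amos's ordering rule can be sketched as a config fragment (the acl name and domain below are hypothetical):

```
# Rules are evaluated top-down: this match is decided before any
# auth-based rule, so it never triggers an authentication challenge.
acl statsite dstdomain .stats.example.com
http_access allow statsite

# First auth-based test; everything from here down requires credentials.
acl ntlm_users proxy_auth REQUIRED
http_access allow ntlm_users
http_access deny all
```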

Stunning service :)
Thanks very much Amos, much appreciated, all up and running as required.

All the best,

Julian





[squid-users] NTLM Auth problem

2011-02-23 Thread Julian Pilfold-Bagwell

Hi All,

I have a problem with NTLM authentication on squid-2.6.STABLE21-6.el5 on 
CentOS 5.5.


If I run /usr/bin/ntlm_auth --username=jpb --domain=BGS, it returns 
success.  Samba (v3.5.6) file sharing works, as does winbind's wbinfo -, 
wbinfo -g, wbinfo -t, so I'm fairly sure that both Samba and winbind are 
functioning OK.


If I go to a client and try to visit a website, I get the pop up 
credentials box but entering the same credentials as on the ntlm_auth 
line above generates the following with the virtual XP being a VM and 
the jpb-workstation being a Linux box:


[2011/02/23 22:49:05.671790,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa2088207
[2011/02/23 22:49:05.674159,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[bgs0001] domain=[BGS] workstation=[VIRTUAL-XP] len1=24 len2=24
[2011/02/23 22:49:05.675008,  3] utils/ntlm_auth.c:598(winbind_pw_check)
  Login for user [BGS]\[bgs0001]@[VIRTUAL-XP] failed due to [Invalid 
handle]



[2011/02/23 23:03:24.838232,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0x00088207
[2011/02/23 23:03:24.845152,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[jpb] domain=[] workstation=[jpb-desktop] len1=24 len2=24
[2011/02/23 23:03:24.845972,  3] utils/ntlm_auth.c:598(winbind_pw_check)
  Login for user []\[jpb]@[jpb-desktop] failed due to [Invalid handle]
[2011/02/23 23:03:40.780692,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0x00088207
[2011/02/23 23:03:40.782125,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[jpb] domain=[bgs] workstation=[jpb-desktop] len1=24 len2=24
[2011/02/23 23:03:40.782938,  3] utils/ntlm_auth.c:598(winbind_pw_check)
  Login for user [bgs]\[jpb]@[jpb-desktop] failed due to [Invalid handle]
[2011/02/23 23:05:13.260874,  3] libsmb/ntlmssp.c:65(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0x00088207
[2011/02/23 23:05:13.262425,  3] libsmb/ntlmssp.c:747(ntlmssp_server_auth)
  Got user=[jpb] domain=[] workstation=[jpb-desktop] len1=24 len2=24
[2011/02/23 23:05:13.263254,  3] utils/ntlm_auth.c:598(winbind_pw_check)
  Login for user []\[jpb]@[jpb-desktop] failed due to [Invalid handle]


Given that using the ntlm_auth command directly succeeds, I'm unsure as 
to whether this is a problem with Samba, Squid or the interaction between 
the two.  I've set the permissions on the winbind privileged pipe to 
750, created a group called winbindd_priv and added the squid user to 
that group. There are no messages relating to being unable to read from 
the pipe.


There are other people that have had the same problem but nothing I've 
looked at has solved it yet.  Has anyone else been here?


Thanks.

Julian


[squid-users] Squid slows under load

2011-03-03 Thread Julian Pilfold-Bagwell

Hi All,

I've been having some problems with Squid and Dansguardian for a while 
now and despite lots of time on Google, haven't found a solution.


The problem started a week or so back when I noticed that Squid was 
slowing.  A quick look through the logs showed it was running out of 
file descriptors, so I upped the limit to compensate.  The server was 
ancient, so I bought in an HP ProLiant DL120 (dual-core Pentium 2.80GHz G6950 
CPU & 4GB of RAM).  At the same time, I bought in 2 x 60GB SSD drives to 
use as cache space, with the system on a RAID 1 array with 160GB SATA II 
disks.


On this, I installed Ubuntu server 10.04.2 LTS with Squid 2.7 (from apt) 
and Dansguardian 2.10.1.1. The kernel version is 2.6.32-24-server and 
the server authenticates via a Samba PDC (v 3.5.6) using 
OpenLDAP/Winbind.  The Samba version on the proxy machine is v 3.4.7 as 
supplied from the Ubuntu repo.


This however also seems to run out of steam.  My first thought was that 
it may have been running out of RAM, so I ran htop.  Both CPUs were 
topping out at 20% and, out of the 4GB of RAM, 1.3GB was used.  Next I 
checked the load on the NIC and found that it was running on average at 
400kB/s, with the odd burst at 5MB/s.  As the load increased, web pages 
were taking up to 30-45 seconds to load.  I bypassed DansGuardian and 
went in on 3128 with no change in performance.


Following the recommendations on other sites discovered via Google, I 
tuned and tweaked settings with no real benefit, and I can't see that I 
changed anything to cause it to happen. The log files look fine, I have 
1 file descriptors available and cachemgr shows plenty of spares. 
There are 50% more NTLM authenticators than are in use at any given time.


The config file for Squid is shown below.  I have had the number of 
authenticators set to 400 as I have 350 users but the number in use 
still peaked at around 50. If I've been a numpty and done something 
glaringly obvious, I'd be grateful if someone could point it out. If 
not, ask for info and I'll provide it.


Thanks,

Jools


## Squid.conf
## Start with authentication for clients

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm_param children 100
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 100
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

## Access Control Lists for filter bypass ##
acl realtek dstdomain .realtek.com.tw
acl tes dstdomain .tes.co.uk
acl glogster dstdomain .glogster.com
acl adobe-installer dstdomain .adobe.com # allow installs from adobe 
download manager
acl actihealth dstdomain .actihealth.com .actihealth.net # Allow direct 
access for PE dept activity monitors
acl spybotupdates dstdomain .safer-networking.org .spybotupdates.com # 
Allow updates for Spybot SD
acl sims-update dstdomain .kcn.org.uk .capitaes.co.uk .capitasolus.co.uk 
.sims.co.uk # Allow SIMS to update itself directly

acl kcc dstdomain .kenttrustweb.org.uk # Fix problem with county
acl frenchconference dstdomain flashmeeting.e2bn.net
acl emsonline dstdomain .emsonline.kent.gov.uk
acl clamav dstdomain .db.gb.clamav.net
acl ubuntu dstdomain .ubuntu.com .warwick.ac.uk
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl windowsupdate dstdomain download.adobe.com
acl comodo dstdomain download.comodo.com
acl simsb2b dstdomain emsonline.kent.gov.uk
acl powerman dstdomain pmstats.org
acl ability dstdomain ability.com
acl fulston dstdomain fulstonmanor.kent.sch.uk
acl httpsproxy dstdomain .retiredsanta.com .atunnel.com .btunnel.com 
.ctunnel.com .dtunnel.com .ztunnel.com .partyaccount.com


## Access Control for filtered users ##
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl ntlm_users proxy_auth REQUIRED

acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl 

[squid-users] Squid and Windows Update

2007-06-21 Thread Julian Pilfold-Bagwell

Hi All,

I have an NTLM-authenticated Squid proxy and am trying to get to Windows 
Update. Up until about 3 weeks ago it worked OK, but then it stopped and I 
haven't been able to get it going since. I have microsoft.com and 
windowsupdate.com in an always_direct acl and have used proxycfg to set 
the proxy up on the Windows boxes.  I've also ticked "http 1.1 connection 
on proxy" in IE6's options. I've spent hours on Google without finding 
any solution. Could someone have a look through the acls below to see if 
I've missed something, please?


Cheers,

Jools

PS: Below is a snap from the proxy log showing what's happening when I 
try to connect. Thanks.


# Log Output

1182427844.513 RELEASE -1  62992ED631E0F39DDA8C8DC2F898F266  407 
1182427844 0 1182427844 text/html 1325/1325 GET 
http://go.microsoft.com/fwlink/?
1182427844.520 RELEASE -1  2E6A5C7F93EEE6901CCCEE0DEB5A2229  407 
1182427844 0 1182427844 text/html 1325/1325 GET 
http://go.microsoft.com/fwlink/?
1182427844.533 RELEASE -1  DEE0F5C0483083C6578A92A5A262DBA8  407 
1182427844 0 1182427844 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427844.868 RELEASE -1  A8ABED5E2C14C5B1E9D0C071634A6A5F  407 
1182427844 0 1182427844 text/html 1325/1325 GET 
http://go.microsoft.com/fwlink/?
1182427844.898 RELEASE -1  8A2AF11EB29DC53BECCE375C51ED2564  407 
1182427844 0 1182427844 text/html 1325/1325 GET 
http://go.microsoft.com/fwlink/?
1182427845.371 RELEASE -1  E376783F93B586292C10EB17CEED8C0D  302 
1182427844-1 1182427784 text/html 135/135 GET 
http://go.microsoft.com/fwlink/?
1182427845.395 RELEASE -1  DB56627F467C065BB2717F8C4807EE04  302 
1182427844-1 1182427784 text/html 135/135 GET 
http://go.microsoft.com/fwlink/?
1182427845.959 RELEASE -1  FC48317C07A19CD1D257DF7931B8CF91  407 
1182427845 0 1182427845 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427845.965 RELEASE -1  9FDB6B061BB1A01FD5774EDCF57BFE72  407 
1182427845 0 1182427845 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427845.968 RELEASE -1  24E1583A4D3FE04F9CC5D92791D8234F  407 
1182427845 0 1182427845 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427846.017 RELEASE -1  307158AE09CFED627438DB4C97BB6DE7  407 
1182427846 0 1182427846 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427848.314 RELEASE -1  B54B1B79B60C0A9EE18BCC5F376CCCF0  407 
1182427848 0 1182427848 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427848.335 RELEASE -1  106150D23930001055AB50F33462E587  407 
1182427848 0 1182427848 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427848.385 RELEASE -1  8F2EB8EA5C13E1999AA8BBA44C8DE2CC  407 
1182427848 0 1182427848 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427848.608 RELEASE -1  9AAF6E2DA487093383A0DD59ADB264B4  407 
1182427848 0 1182427848 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427848.628 RELEASE -1  552B7EA2E74614B8A4E9E82E193FC296  407 
1182427848 0 1182427848 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427848.631 RELEASE -1  B2701012D1DE2296A7678125A6841581  407 
1182427848 0 1182427848 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427848.681 RELEASE -1  6194E73C33414591F76E8645DD78AF71  407 
1182427848 0 1182427848 text/html 1301/1301 CONNECT 
update.microsoft.com:443
1182427848.928 RELEASE -1  2B64CB519E1123FE9772D9D2FD6B9D23  407 
1182427848 0 1182427848 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427848.959 RELEASE -1  BAB09BA63C9B037455216ED743BDE755  407 
1182427848 0 1182427848 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427849.014 RELEASE -1  964028CC20022B536F59877D37745174  407 
1182427849 0 1182427849 text/html 1463/1463 POST 
http://stats.update.microsoft.com/ReportingWebService/ReportingWebService.asmx
1182427850.033 RELEASE -1  36FDA330BD08904D927FB76ABD56B1D1  407 
1182427850 0 1182427850 text/html 1292/1292 CONNECT 
urs.microsoft.com:443
1182427850.075 RELEASE -1  B5335E465AA32ED4259749CBB2AC4236  407 
1182427850 0 1182427850 text/html 1292/1292 CONNECT 
urs.microsoft.com:443
1182427850.127 RELEASE -1  0D4261BD99331073CAE9F2FA94E0EE61  407 
1182427850 0 1182427850 text/html 1292/1292 CONNECT 
urs.microsoft.com:443
1182427850.130 RELEASE -1  32CCE2EA2FB00E6CA57DF5D5F2CC6799  407 
1182427850 0 1182427850 

Re: [squid-users] Squid and Windows Update

2007-06-22 Thread Julian Pilfold-Bagwell

Henrik Nordstrom wrote:

On Thursday 2007-06-21 at 14:22 +0100, Julian Pilfold-Bagwell wrote:

  If I am to guess you might need to allow access to the windows
  update servers without using authentication.

 Is it possible to do that while retaining authentication for users?

Yes.

Just allow access to the windows update servers before where you
normally require authentication.

Regards
Henrik

Hi again,

Does the first acl line: 


acl winupdate dstdomain .microsoft.com .windowsupdate.com

not do this? I put the always_direct rule in before the mynetwork rule 
but it doesn't seem to do the trick.
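A hedged note on why the acl alone isn't enough: a `dstdomain` acl does nothing until it is referenced by an `http_access` rule, and `always_direct` only controls peer selection, not authentication. The ordering Henrik describes, sketched:

```
acl winupdate dstdomain .microsoft.com .windowsupdate.com
# Must appear ABOVE the first http_access rule that uses a proxy_auth acl,
# so Windows Update clients are never challenged for credentials.
http_access allow winupdate
```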


Thanks,

Jools


Re: [squid-users] Squid and Windows Update - SOLVED!!

2007-06-22 Thread Julian Pilfold-Bagwell

Hi Henrik,

It's cured. You were right about allowing access to winupdate. The 
confusing aspect is that, some time back, we had to wrestle for a day to 
get it working after Windows updated itself. It turned out that you had 
to use the always_direct directive to get it to work, as it would crash out 
otherwise.


Don't know what Microsoft have done to Windows Update, but it now has to 
go back through an http_access allow rule.


Thanks again, much appreciated,

All the best,

Julian Pilfold-Bagwell


[squid-users] Slow connection through proxy

2007-08-01 Thread Julian Pilfold-Bagwell

Hi All,

I have a problem with my proxy and Windows clients on certain IP ranges 
on my network.


I've just upgraded my network from a single LDAP/Samba server running on 
Mandriva 2007 to a dual redundant setup with DNS, NTP and LDAP 
master/slave on two servers, with a separate PDC and BDC pair 
authenticating and providing file shares. Authentication on the network 
for users is fast as lightning. On the old network I had a Mandriva 2007 
box with Squid proxying and NTLM auth, and this machine has been moved to 
the new setup. Clients are spread across three IP ranges, 172.20.0., 
172.20.1. and 172.20.2., with the 0 range being assigned static IPs and 
the 1 and 2 ranges collecting an IP from DHCPD.


If I connect a client to the network, it obtains an address from the 
DHCP server along with DNS, gateway and WINS server settings, but the 
connection via Squid is slow, e.g. 30-120 seconds to obtain a page. If I 
take the settings from ipconfig and enter them manually, but with an IP 
in the 172.20.0 range, it works perfectly, with pages appearing within 
1-2 seconds.


nslookup returns IPs within a second on the proxy and clients, and 
su'ing to a user account on the proxy takes a split second, suggesting 
that nss and pam_smb are authenticating OK.


On the old network the proxy worked fine across all three IP ranges; on 
the new one it behaves as above.  Is there anywhere in particular I should 
be looking for clues to this one? I'll be out of the office until Monday, 
but I'll check the mail as soon as I can for a reply.


Many thanks,

Julian PB


Re: [squid-users] SSL errors with Squid 3.5.27

2018-06-09 Thread Julian Perconti
>> https_port 3130 intercept ssl-bump \
>>   cert=/etc/squid/ssl_cert/squidCA.pem \
>>   key=/etc/squid/ssl_cert/squidCA.pem \
>>   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
>> tls-dh=/etc/squid/ssl_cert/dhparam.pem
>
>These DH parameters are for old DH not for ECDHE (missing curve name).
>So this may be restricting what your Squid can do to match up the client and 
>server crypto requirements.

Hi Amos,

I have commented the line: "tls-dh=/etc/squid/ssl_cert/dhparam.pem"

And, it seems that many errors (SSL errors) in cache.log have disappeared.
I will confirm later if WhatsApp works from iOS/Android.

Thank You!

PS: I used this option (tls-dh, dhparam, etc.) following the official 
squid-cache.org documentation on "hardening"/"improving security".
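If the DH parameters are wanted back without losing the ECDHE ciphers, Squid 3.5's `tls-dh` also accepts a `curve:file` form. A sketch reusing the paths from this thread (the curve name here is an example choice):

```
https_port 3130 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/squidCA.pem \
  key=/etc/squid/ssl_cert/squidCA.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
  tls-dh=prime256v1:/etc/squid/ssl_cert/dhparam.pem
```

Naming a curve enables the ECDHE cipher suites while the file still supplies parameters for classic DHE.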

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL errors with Squid 3.5.27

2018-06-08 Thread Julian Perconti
Hello community, I am new to the list and I hope everyone is well.

I have a Squid server running on Debian 7.

My Squid version is 3.5.27, manually compiled with LibreSSL 2.6.0 due to
problems with Dropbox. After compiling Squid with LibreSSL, the error
"unknown cipher returned" disappeared and Dropbox worked correctly.

Everything works quite well, except that in /var/log/squid/cache.log there
are (at least) 5 types of problems:

[1] 2018/06/08 17:14:05 kid1| Error negotiating SSL connection on FD 7:
error:14037418:SSL routines:ACCEPT_SR_KEY_EXCH:tlsv1 alert unknown ca (1/0)
[2] 2018/06/08 17:14:39 kid1| Error negotiating SSL on FD 11:
error:14007086:SSL routines:CONNECT_CR_CERT:certificate verify failed
(1/-1/0)
[3] 2018/06/08 18:35:43 kid1| Error negotiating SSL connection on FD 10:
(104) Connection reset by peer
[4] 2018/06/08 18:56:52 kid1| Error negotiating SSL on FD 13:
error::lib(0):func(0):reason(0) (5/-1/104)
[5] 2018/06/08 19:20:06 kid1| Error negotiating SSL connection on FD 9:
error:06FFF064:digital envelope routines:CRYPTO_internal:bad decrypt (1/-1)

However, I think (I'm not sure, but...) that the most serious is number
[2]:
SSL negotiating error on FD 11: error: 14007086: SSL routines:
CONNECT_CR_CERT:certificate verify failed (1/-1/0)

The problem I have is with WhatsApp from mobile devices... the application
tries to connect to the network indefinitely without success, and the error
that appears (at that moment) is [2]: (...) certificate verify failed
(1/-1/0)

This is the most relevant configuration of squid currently:

http_port 3128

http_port 3129 intercept

https_port 3130 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/squidCA.pem \
  key=/etc/squid/ssl_cert/squidCA.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
tls-dh=/etc/squid/ssl_cert/dhparam.pem

sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslproxy_cafile /etc/squid/ssl_cert/cert.pem # LibreSSL SLL CA Bundle

sslproxy_foreign_intermediate_certs /etc/squid/ssl_cert/intermediate.pem

sslproxy_options SINGLE_DH_USE

sslproxy_cipher
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:E
ECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!
aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

acl noBumpSites ssl::server_name_regex -i "/etc/squid/url.nobump"

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1 all
ssl_bump peek step2 nobumpSites
ssl_bump splice step3 nobumpSites
ssl_bump stare step2 all
ssl_bump bump step3 all
(...)

In the file "/etc/squid/url.nobump", I have expressions like these:

(...)
# IM
\.skype\.com$

\.whatsapp\.com$
\.whatsapp\.net$
(...)

I have read that WhatsApp, Facebook, and many other servers use "Certificate
Pinning" to avoid "Man-in-the-middle" attacks.

But I can not find any solution/fix or workaround.
The server certificate is installed on mobile devices. The flaw occurs with
both Android and iOS devices.

Any kind of suggestion is welcome, whether there is something wrong in the
configuration written above or, better yet, if someone knows the cause and
solution of this problem.

Thank you very much to all!



Re: [squid-users] SSL errors with Squid 3.5.27

2018-06-18 Thread Julian Perconti
Googling, I found these config lines:

acl SSLERR ssl_error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT 
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN

sslproxy_cert_error allow SSLERR

sslproxy_cert_error deny all

The error "certificate verify failed" has disappeared; I refer to this error:

routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)
2018/06/18 08:22:08.825 kid1| 4,3| errorpage.cc(1100) Convert: errorConvert: 
%%D --> 'Self-signed SSL Certificate in chain: /C=US/O=Apple Inc./OU=Apple 
Certification Authority/CN=Apple Root CA'

But... WhatsApp on iOS devices still does not work.
So that was not the root cause of the problem.

On Android, WhatsApp works fine.

Any other idea?

Thank You.




Re: [squid-users] SSL errors with Squid 3.5.27

2018-06-18 Thread Julian Perconti
> have you tried -servername option for setting SNI extension?

How can I do this?
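A sketch of how `-servername` is used with `openssl s_client`; the script only builds and prints the probe command so it can be copied and run against the live server (the hostname is taken from the cache.log excerpt below in this thread):

```shell
# -servername sends the TLS SNI extension so the server returns the
# certificate chain for that virtual host; -showcerts prints every
# certificate in the chain.
HOST=courier.push.apple.com
printf 'openssl s_client -connect %s:443 -servername %s -showcerts </dev/null\n' \
  "$HOST" "$HOST"
```

Running the printed command lets you compare the chain the server presents for that SNI name with what Squid is verifying.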



Well, debugging cache.log I found this:

2018/06/18 08:22:08.822 kid1| 83,5| support.cc(300) ssl_verify_cb: Self signed 
certificate in certificate chain: /CN=courier.push.apple.com/O=Apple 
Inc./ST=California/C=US
2018/06/18 08:22:08.822 kid1| 83,7| bio.cc(168) stateChanged: FD 16 now: 0x4008 
3RSC_B (SSLv3 read server certificate B)
2018/06/18 08:22:08.822 kid1| 83,7| bio.cc(168) stateChanged: FD 16 now: 0x1002 
3RSC_B (SSLv3 read server certificate B)
2018/06/18 08:22:08.823 kid1| Error negotiating SSL on FD 16: 
error:14007086:SSL routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)
2018/06/18 08:22:08.825 kid1| 4,3| errorpage.cc(1100) Convert: errorConvert: 
%%D --> 'Self-signed SSL Certificate in chain: /C=US/O=Apple Inc./OU=Apple 
Certification Authority/CN=Apple Root CA'
2018/06/18 08:22:08.830 kid1| 33,5| client_side.cc(4185) getSslContextStart: 
Generating SSL certificate for courier.push.apple.com using ssl_crtd.
2018/06/18 08:22:08.831 kid1| 33,5| client_side.cc(4189) getSslContextStart: 
SSL crtd request: new_certificate 3294 host=courier.push.apple.com
-BEGIN CERTIFICATE-
-END CERTIFICATE-
2018/06/18 08:22:08.831 kid1| 84,9| helper.cc(386) helperSubmit:  
buf[3316]=new_certificate] 3294 host=courier.push.apple.com
-BEGIN CERTIFICATE-
-END CERTIFICATE-
2018/06/18 08:22:08.835 kid1| 84,9| helper.cc(875) helperHandleRead:  
accumulated[3002]=OK] 2993 -BEGIN CERTIFICATE-
-END CERTIFICATE-

On Android devices WhatsApp works fine; slow, but it works.

I think that the main problem resides in this line:

ssl_verify_cb: Self signed certificate in certificate chain:

courier.push.apple.com is in the Entrust L1K chain... (if I'm not wrong)

Any idea?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL errors with Squid 3.5.27

2018-06-12 Thread Julian Perconti
>Interesting.
>
>The main issue was that you configured only params for the Diffie-Hellman (DH
>and DHE) ciphers - no curve name. That meant your specified EEC* ciphers were
>disabled since they require a curve name as well.
>
>Removing this option completely disables both DH and ECDH cipher types.
>Leaving your proxy with only the RSA based ciphers.
>
>Amos

kid1| Error negotiating SSL on FD 60: error:14007086:SSL 
routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)

Hi Amos,

I still have no luck connecting to WhatsApp from iOS.

How can I track this error?

kid1| Error negotiating SSL on FD 60: error:14007086:SSL 
routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)

I mean, examine the FD... or what? How? From iOS I can't see any
error; it just tries to connect indefinitely.

Against some WhatsApp/Facebook server, the command:

openssl s_client -connect x.x.x.x:443 -showcerts

does not show any certificate and establishes a connection with TLS 1.2...

Any idea?

Thank You

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL errors with Squid 3.5.27 [SOLVED]

2018-07-02 Thread Julian Perconti
Hi all,

Problem solved.

With Squid 4 and OpenSSL 1.1.

I realized that WhatsApp uses the following ports:

5223, 5228, 4244, 5242, and 5222, in addition to 443 and 80.

So I opened those ports on the firewall and everything worked.
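As a sketch only (the exact commands depend on your firewall; the chain and policy here are assumptions, not from this thread), opening those ports with iptables could look like:

```shell
# Allow forwarded outbound TCP to the WhatsApp ports listed above.
for port in 80 443 4244 5222 5223 5228 5242; do
    iptables -A FORWARD -p tcp --dport "$port" -j ACCEPT
done
```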

I also changed the cipher suite in squid.conf like this (for the Dropbox
client problem):

tls_outgoing_options cipher=ALL:!SSLv2:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:!aNULL:!eNULL

software info:

Squid Cache: Version 4.0.25
Service Name: squid

This binary uses OpenSSL 1.1.0f  25 May 2017. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu' 
'--localstatedir=/var/squid' '--libexecdir=/lib/squid' '--srcdir=.' 
'--datadir=/share/squid' '--sysconfdir=/etc/squid' '--with-default-user=proxy' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' 
'--with-openssl' '--enable-ssl-crtd' '--mandir=/share/man' 
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' 
'--enable-delay-pools' '--enable-icap' '--enable-cache-digests' 
'build_alias=x86_64-linux-gnu' --enable-ltdl-convenience

# openssl version
OpenSSL 1.1.0f  25 May 2017

Thanks everybody.

PS: I think the same would work with Squid 3.5.27 on Debian 7, because it was a
firewall problem.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 4.1 Error negotiating SSL connection

2018-07-03 Thread Julian Perconti
Hi all,

 

I have installed Squid 4.1 on Debian 9 with OpenSSL 1.1.0f in transparent
mode.

 

I need to know how to track this error (debugging is almost
impossible; I mean examining the FD, etc.):

 

kid1| Error negotiating SSL connection on FD 19:
error:0001:lib(0):func(0):reason(1) (1/-1)

 

There are a lot of them in cache.log when mobile devices use (unsuccessfully)
apps like Instagram/Pinterest/Facebook/Twitter, etc.

 

Nor is it an outgoing-cipher problem, because I just tried: tls_outgoing_options
cipher=ALL (only for testing)

 

From any PC those sites work well, so it is not a missing-certificate
problem.

 

Here is a copy of the most relevant config:

 

=CFG==

http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump \
  cert=/etc/squid/ssl_cert/squid4ssl.pem \
  key=/etc/squid/ssl_cert/squid4ssl.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

sslcrtd_program /lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB

tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt
tls_outgoing_options cafile=/etc/squid/ssl_cert/cabundle.pem
tls_outgoing_options options=NO_SSLv3
tls_outgoing_options cipher=ALL:!SSLv2:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:!aNULL:!eNULL

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

acl noBumpSites ssl::server_name_regex -i "/etc/squid/url.nobump"

ssl_bump peek step1 all
ssl_bump peek step2 noBumpSites
ssl_bump splice step3 noBumpSites
ssl_bump stare step2
ssl_bump bump step3

# cache ram
cache_mem 1024 MB

=CFG==
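One detail worth noting: the certificate cache referenced by sslcrtd_program must be initialized once before Squid starts. A sketch, reusing the same paths and -M size as the config above (the 'proxy' user is taken from the configure options shown earlier in this thread):

```shell
# Create the on-disk certificate DB for the certgen helper, then hand it
# to the user Squid runs as.
/lib/squid/security_file_certgen -c -s /var/lib/ssl_db -M 4MB
chown -R proxy:proxy /var/lib/ssl_db
```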

 

And so on..

 

Any suggestion on the config above? Or a workaround for the problem mentioned?

 

Thank you all!

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 4.2 is available

2018-08-10 Thread Julian Perconti
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Friday, August 10, 2018 02:41
> To: squid-annou...@lists.squid-cache.org
> Subject: [squid-users] [squid-announce] Squid 4.2 is available
> 
> The Squid HTTP Proxy team is very pleased to announce the availability of the
> Squid-4.2 release!
> 
> 

Works great!
And I noticed an important performance improvement.

Thank You!

> 
> 
> Amos Jeffries



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-08-12 Thread Julian Perconti
> -Original message-
> From: Alex Rousskov 
> Sent: Sunday, August 12, 2018 20:50
> To: Julian Perconti ; squid-users@lists.squid-
> cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> On 08/12/2018 04:09 PM, Julian Perconti wrote:
> 
> > I would like to know which of these two cfg's are "better" or "more secure"
> > when a site/domain is spliced, bumped, etc.
> 
> It is impossible to answer that question without knowing how _you_ define
> "better" or "more secure".

First of all: I am relatively new to the SSL/TLS filtering world. There are
many things I don't understand very well yet.

You might be right and I am probably wrong.

I meant "security" from the client side when accessing a non-bumped or
spliced site, e.g. a bank website... client-side "privacy", or a real
man-in-the-middle attack due to Squid being in the middle.

It is well known that no system/network/OS is 100% secure, but I always
thought, and still think, that with HTTPS proxying/filtering things become
riskier than if it did not exist, even with Squid 100% correctly configured
and the server well secured.

> 
> 
> > acl noBumpSites ssl::server_name_regex -i "/etc/squid/url.nobump"
> 
> > # ssl_bump option 1: (with this I don't see the domain in "TUNNEL"
> > line, just the IP addr.)
> >
> > ssl_bump peek step1
> > ssl_bump peek step2 noBumpSites
> > ssl_bump splice step3 noBumpSites
> > ssl_bump stare step2
> > ssl_bump bump step3
> >
> > # ssl_bump option 2: (with this I see the domain in "TUNNEL" line.)
> >
> > ssl_bump peek step1
> > ssl_bump splice noBumpSites
> > ssl_bump bump all
> >
> >
> > And (if possible) could anyone explain the differnce between these 2 cfg's
> ?
> 
> Bugs notwithstanding, Option 1 looks at the TLS server Hello details
> (step2) before splicing or bumping the connections (at step3). Option 2 does
> not -- it splices or bumps based on TLS client Hello info only.
> 

What does Squid do when I don't specify the step?

For example:

What does Squid do with:
ssl_bump splice step3 noBumpSites

...and what does it do instead with this?
ssl_bump splice noBumpSites

> Option 1 should give Squid/you more information about the server when
> splicing the two connections. For example, you can use server certificate info
> during step3 and when logging.
> 
> Option 1 should give the client more information about the server when
> bumping the client -- the client will get a mimicked server certificate detail
> with this option.
> 
> I believe the information obtained at each step is documented at
> https://wiki.squid-cache.org/Features/SslPeekAndSplice

Yes, but many things are pretty complex to understand well, even when running tests.

> 
> Please note that your
> 
>   ssl_bump splice step3 noBumpSites
> 
> is a bit risky because your noBumpSites may match differently on each step
> (as it gets more reliable information). It could match at step2 but not match
> at step3 or vice versa, but the decision to splice (or bump) is essentially 
> made
> at step2 -- if you peeked at step2, then you should be splicing or terminating
> at step3 (and if you stared at step2, then you should be bumping or
> terminating at step3). Your rules may not follow that principle if noBumpSites
> matching changes.

I will consider this.

So, would you prefer option 2? For now, I am testing that option.

>
> 
> > with Option 1 I don't see the domain in "TUNNEL" line, just the IP
> > addr.)
> 
> I doubt that is how it is supposed to work. When splicing, Option 1 should
> have the same or more information so it should log the domain name if
> Option 2 has the domain name. If you are comparing log lines for identical
> transactions, then this could be a Squid bug.
> 

I don't know; I am just reporting what happens in access.log when I switch
between these ssl_bump configs.

> Alex.

Thank You

P.S.: Squid version 4.2 on Debian 9.5



[squid-users] About SSL peek-n-splice/bump configurations

2018-08-12 Thread Julian Perconti
Hi,

I would like to know which of these two configs is "better" or "more secure"
when a site/domain is spliced, bumped, etc.

Here the lines...

# mandatory lines:

acl noBumpSites ssl::server_name_regex -i "/etc/squid/url.nobump"

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

___

# ssl_bump option 1: (with this I don't see the domain in "TUNNEL" line,
just the IP addr.)

ssl_bump peek step1
ssl_bump peek step2 noBumpSites
ssl_bump splice step3 noBumpSites
ssl_bump stare step2
ssl_bump bump step3

# ssl_bump option 2: (with this I see the domain in "TUNNEL" line.)

ssl_bump peek step1
ssl_bump splice noBumpSites
ssl_bump bump all


And, if possible, could anyone explain the difference between these two configs?

The peek-n-splice configuration was never clear to me.
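Incidentally, the noBumpSites ACL above reads one regular expression per line from /etc/squid/url.nobump, matched case-insensitively (the -i flag) against the TLS server name. A hypothetical example file (the domains are placeholders, not from this thread):

```
# /etc/squid/url.nobump -- one regex per line
\.mybank\.example$
\.webmail\.example$
```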

Thank you in advance,

All the best



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-06 Thread Julian Perconti
> From: Alex Rousskov 
> Sent: Monday, August 13, 2018 02:01
> To: Julian Perconti ; squid-users@lists.squid-
> cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> On 08/12/2018 06:57 PM, Julian Perconti wrote:
> >> From: Alex Rousskov 
> >> Sent: Sunday, August 12, 2018 20:50
> >> To: Julian Perconti ;
> >> squid-users@lists.squid-cache.org
> >> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> >>
> >> On 08/12/2018 04:09 PM, Julian Perconti wrote:
> >>> I would like to know which of these two cfg's are "better" or "more
> secure"
> >>> when a site/domain is spliced, bumped, etc.
> 
> >> It is impossible to answer that question without knowing how _you_
> >> define "better" or "more secure".
> 
> 
> > I tried to meant, "security" from the client-side accessing to a
> > non-bumped or spliced site, i.g.: bank website... client-side
> > "privacy" or an a -real- man-in-the-middle attack due to squid in the
> > middle.
> 
> A splicing Squid does not perform a man-in-the-middle attack on TLS or HTTP
> traffic. It essentially acts as a TCP/IP-level proxy and can log TLS handshake
> details. In some environments, doing all that improves "privacy" and
> "security". In others, it makes things worse (for some definition of "privacy"
> and "security").
> 
> A bumping Squid performs a man-in-the-middle attack on TLS traffic.
> After a successful attack, it essentially acts as an HTTP-level proxy and can 
> log
> or even alter TLS and HTTP traffic. In some environments, doing all that
> improves "privacy" and "security" (for some definition of "privacy" and
> "security"). In others, it makes things worse.
> 
> You would have to ask a much more specific question to get a more specific
> (but still correct) answer.
> 
> 
> > Is well-known that there is no system /network/o.s. 100% secure but, I
> > dont know why, I always thought or stil think that with a https
> > proxy/filtering, the security or "the things" tooggles more risky if
> > this one did not exist. Even squid 100% correctly configured and
> > server well secured.
> 
> There are examples where deploying a splicing or even bumping Squid
> improves security of the humans and/or machines that are trusting Squid to
> examine and/or police their traffic. There are counter-examples as well. And
> I am sure that many installations can be viewed as both, depending on who
> gets to define "privacy", "security", and the "right balance" between the
> two.
> 
> 
> > What does squid when I dont specify the step?
> 
> Bugs notwithstanding, Squid should either
> 
> * bump if you were staring during the previous (explicitly configured) step or
> 
> * splice otherwise (including cases when no previous step was explicitly
> configured or existed).
> 
> I would not rely on this (correct) behavior without testing (at least) your
> Squid version (at least). I know that early SslBump implementations had bugs
> in that area.
> 
> 
> > For example:
> >
> > What does squid do with..:
> > ssl_bump splice step3 noBumpSites
> 
> Assuming there are no other rules, Squid should splice at step1 (see the
> "splice otherwise" rule above).
> 
> 
> > ...And what it do instead with this?:
> > ssl_bump splice noBumpSites
> 
> Assuming there are no other rules, Squid should splice at step1. It will do 
> that
> when noBumpSites matches (naturally) and if noBumpSites does not match
> (per the "splice otherwise" rule above).
> 
> 
> > So, Would You prefer option 2?
> 
> Sorry, I cannot answer this question -- too many unknown variables. It is like
> asking a doctor whether she prefers to treat the patient with drug A or drug
> B when the doctor does not know what the patient is suffering from and
> what the patient's treatment preferences/goals are.
> 
> 
> >>> with Option 1 I don't see the domain in "TUNNEL" line, just the IP
> >>> addr.)
> 
> >> I doubt that is how it is supposed to work. When splicing, Option 1
> >> should have the same or more information so it should log the domain
> >> name if Option 2 has the domain name. If you are comparing log lines
> >> for identical transactions, then this could be a Squid bug.
> 
> > I dont know, I just tell what happen in the access.log when I
> > switching between these ssl_bum

Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-07 Thread Julian Perconti
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Friday, September 7, 2018 01:18
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> On 7/09/18 1:48 PM, Julian Perconti wrote:
> > Hi all,
> >
> > I have a new strange situation:
> >
> > With this peek-n-splice configuration:
> >
> > ssl_bump peek step1 all
> > ssl_bump peek step2 noBumpSites
> > ssl_bump splice step3 noBumpSites
> > ssl_bump bump
> 
> So... (lets call this config A)
> 
> #step1 does this:
> 
> > ssl_bump peek step1 all
> 
> #step2 does this:
> 
> > ssl_bump peek step2 noBumpSites
> > ssl_bump bump
> 
> If the bump at step2 happened, there is no step3.
> 
> #step3 does this:
> 
> > ssl_bump splice step3 noBumpSites
> 
> 
> 
> >
> > I got this error on spliced sites (a bank site):
> >
> > The system return in the browser this error: (chrome 69):
> >
> > (104) Connection reset by peer (TLS code: SQUID_ERR_SSL_HANDSHAKE)
> > Handshake with SSL server failed: [No Error]
> >
> > This proxy and the remote host failed to negotiate a mutually acceptable
> security settings for handling your request. It is possible that the remote 
> host
> does not support secure connections, or the proxy is not satisfied with the
> host security credentials.
> >
> > cache.log:
> > 2018/09/06 22:40:36 kid1| ERROR: negotiating TLS on FD 44:
> > error::lib(0):func(0):reason(0) (5/-1/104)
> >
> > But if i change the ssl bump(s) directive to:
> >
> > ssl_bump peek step1
> > ssl_bump splice noBumpSites
> > ssl_bump bump all
> >
> 
> So ... (lets call this config B)
> 
> #step1 does this:
> 
> > ssl_bump peek step1
> 
> #step2 does this:
> 
> > ssl_bump splice noBumpSites
> > ssl_bump bump all
> 
> Notice there is never any step3, and the splice in this ruleset happens at
> step2.
> 
> 
> So config (A) is trying to do a step3 (handshake with server) when it has only
> peek'ed and relayed the clientHello as-is (including any secret tokens an
> unknown features the client is trying to use). The bump action is bound to
> fail.
>  ** "stare" is the action which sets up and filters the handshake ready for
> bump action at step3 (server handshake with TLS features Squid knows how
> to handle).


So from http://marek.helion.pl/install/squid.html

We have this config:

ssl_bump peek step1 all
ssl_bump peek step2 noBumpSites
ssl_bump splice step3 noBumpSites
ssl_bump stare step2
ssl_bump bump step3

Is it better to use the above conf (staring at step2)? Because you said that bumping
at step2 is insecure.

Is it the same if I change the order of the above conf to:

ssl_bump peek step1 all
ssl_bump peek step2 noBumpSites
ssl_bump stare step2 <<< order changed
ssl_bump splice step3 noBumpSites
ssl_bump bump step3

> 
> 
> The config (B) bumps at step2. That is what the old and very broken "client-
> first" behaviour used to be. It does not produce any errors from the proxy
> BUT leads directly to a huge pile of security vulnerabilities and nasty side
> effects that may never be seen by you. Use at your own risk.
> 
> 

So, in brief, I think that config A is more secure.

> 
> > I can Access to spliced site and no any kind of errors in access.log
> >
> > Any idea?
> 
> Have you read the documentation?
>  <https://wiki.squid-cache.org/Features/SslPeekAndSplice>

Yes I did, but the topic is still a bit complex for me; see what the author of
the link posted above said about Squid TLS.

> 
> Break your rules down into the stages as I have above and what is going on
> becomes a bit more clear.
> 
> Then you can consider what ssl_bump is doing in terms of what info Squid
> has available.
>  step1: TCP IP:port or CONNECT URI (forward-proxy only)
>  step2: TLS clientHello + TLS SNI (if any)
>  step3: TLS serverHello + server cert
> 
> The entire directive set is interpreted from top-to-bottom left-to-right each
> step. First line to fully match is what happens for that step.

Above in the current thread, there is a question about the order of steps.

However, today I tested the site that caused the handshake problem yesterday,
with the original config, and now it works, so I don't know what could have
happened. By "original config" I mean:

ssl_bump peek step1 all
ssl_bump peek step2 noBumpSites
ssl_bump splice step3 noBumpSites
ssl_bump bump


Thank You

> 
> 
> Amos



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-08 Thread Julian Perconti
> -Original message-
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Friday, September 7, 2018 15:19
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> > So from http://marek.helion.pl/install/squid.html
> >
> > We have this configs:
> >
> > ssl_bump peek step1 all
> > ssl_bump peek step2 noBumpSites
> > ssl_bump splice step3 noBumpSites
> > ssl_bump stare step2
> > ssl_bump bump step3
> >
> > Is better to use the above conf (staring at step2)? Because you said that
> bump at step2 is insecure.
> >
> > Is the same if a I change the order of the above conf to:
> >
> > ssl_bump peek step1 all
> > ssl_bump peek step2 noBumpSites
> > ssl_bump stare step2 <<< order changed ssl_bump splice step3
> > noBumpSites ssl_bump bump step3
> >
> 
> What exactly do you think the step1, step2, step3 ACLs here are doing?

I do not know what exactly these ACLs are doing; that is what I am trying to
find out.
I have some idea about them, but not exact knowledge; for that reason I asked
whether there is a difference between the order of those two configs (because the
step is the same).

> 
> I hoped it is obvious, but maybe not. Understanding that detail should
> help resolve at least some of your confusion about these config snippets
> and how "tiny" changes to them are affecting Squid behaviour in major ways.
> 

No, it isn't obvious to me, and yes, I am still trying to understand by
re-reading the Squid wiki docs and other sites about peek-and-splice decisions
and about the steps too.

> 
> >
> > So in a brief I think that  config A is more secure.
> >
> 
> No. Config (A) from the earlier post actively *creates* insecurity by;

But according to http://marek.helion.pl/install/squid.html, config "A" is
supposed to check the server certificate, because it is peeking at step2 and
splicing the whitelisted sites at step3.

> 
>  1) hiding any information about the real server security level,
> - downgrade attacks. Right down to plaintext levels.
> 
>  2) hiding any information about the server certificate validity,
> - silent third-party MITM.
> - invalid certificate attacks.
> 
>  3) opening the server connection to multiplexed use from multiple
> clients of Squid,
>- consider that in light of (1) and (2)

I don't understand; in an earlier post:

>> ssl_bump peek step1 all
>> ssl_bump peek step2 noBumpSites
>> ssl_bump splice step3 noBumpSites
>> ssl_bump bump
>>
>So... (lets call this config A)

In this config I think the problem is that Squid is peeking at noBumpSites at
step2, but also bumping all other sites at step2 (there is no step specified
in the last line, the bump).

Therefore I think it would be "better", or less insecure, to bump at step3.

Conclusion based on these words:

> So config (A) is trying to do a step3 (handshake with server) when it 
> has only peek'ed and relayed the clientHello as-is (including any 
> secret tokens an unknown features the client is trying to use). The 
> bump action is bound to fail.
>  ** "stare" is the action which sets up and filters the handshake 
> ready for bump action at step3 (server handshake with TLS features 
> Squid knows how to handle).

I think that my config would be something like this:

ssl_bump peek step1 all

ssl_bump peek step2 noBumpSites 

From squid doc:
"When a peek rule matches during step 2, Squid proceeds to step3 where it 
parses the TLS Server Hello and extracts server certificate while preserving 
the possibility of splicing the client and server connections; peeking at the 
server certificate usually precludes future bumping"

And from http://marek.helion.pl/install/squid.html
Peeking at step 2 will check the name stored in server certificate (CommonName, 
SubjectAltName) as well. So let's do it! you must enable peek at step 2 and 
finally splice at step3 (if certName matches the whitelist)

ssl_bump splice step3 noBumpSites
(following the ruleset explained above)

And here I believe that the final bump should be made at step3:

ssl_bump bump step3 

Or is there no difference if I don't specify the step in the bump line?

summarizing:

ssl_bump peek step1 all
ssl_bump peek step2 noBumpSites
ssl_bump splice step3 noBumpSites
ssl_bump bump step3

EDIT:
Before sending this mail to the list, I tested the Squid behaviour: if I don't
add a stare line at step2 (I don't know why this happens; stare is the least
used option I saw in many examples on the web), all accessed sites are spliced.
So my final config would be taken from helion.pl (almost defeated by this
thread/topic):

ssl_bump peek step1 all             # at step 1 we're peeking at the client TLS request in order to find the "SNI"
ssl_bump peek step2 nobumpSites     # here we're peeking at the server certificate
ssl_bump splice step3 nobumpSites   # here we're splicing connections which match the whitelist
ssl_bump stare step2                # here we're staring at the server certificate
ssl_bump 

Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-09 Thread Julian Perconti
> -Original message-
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Sunday, September 9, 2018 02:35
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> On 9/09/18 5:45 AM, Julian Perconti wrote:
> >> -Original message-
> >> From: squid-users  On behalf
> >> of Amos Jeffries Sent: Friday, September 7, 2018 15:19
> >> To: squid-users@lists.squid-cache.org
> >> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> >>
> >>> So from http://marek.helion.pl/install/squid.html
> >>>
> >>> We have this configs:
> >>>
> >>> ssl_bump peek step1 all
> >>> ssl_bump peek step2 noBumpSites
> >>> ssl_bump splice step3 noBumpSites
> >>> ssl_bump stare step2
> >>> ssl_bump bump step3
> >>>
> >>> Is better to use the above conf (staring at step2)? Because you said
> >>> that
> >> bump at step2 is insecure.
> >>>
> >>> Is the same if a I change the order of the above conf to:
> >>>
> >>> ssl_bump peek step1 all
> >>> ssl_bump peek step2 noBumpSites
> >>> ssl_bump stare step2 <<< order changed ssl_bump splice step3
> >>> noBumpSites ssl_bump bump step3
> >>>
> >>
> >> What exactly do you think the step1, step2, step3 ACLs here are doing?
> >
> > I don not know what -exactly- these ACL are doing; that is what I trying to
> find out.
> > I have some ideas about them, but not the exactly knowledge, for that
> > reason I asked if there is difference between those 2 configs order
> > (because the step is the same)
> 
> Okay. When ssl_bump is being processed the first time SslBump1 matches as
> true, the second time SslBump2 is true, and third time for SslBump3.
> Outside their own step in the TLS handshake process they match false.
> 
> This is how you select that a certain line in ssl_bump is *only* to match and
> happen at a certain part (step) of the handshake sequence.

Well.

First of all, thank you for your time and explanation, and patience of course.
It's much appreciated.

I hope this thread helps others that have confusion and doubts similar to mine.
Still, things are not entirely clear; I will quote.


...So that means that Squid processes the ssl_bump directives:

1: maybe more than once in a single request...?

2: in sequential order (as you or Alex said in an earlier post)

- ...and "automagically" determines what to do depending on whether the ACL matches or not?
By this I mean, for example, that a config could contain first a step1
directive, then a step3 directive, and finally a step2 one? With an ACL, of course.
To clarify: the ssl_bump order is determinant, but it also depends on what I want
to do with the steps and ACLs.

Let's say... it is *not* mandatory to give Squid ssl_bump step directives like:

At step1 do x
At step2 do y
At step3 do z

And so on...

> 
> >
> >>
> >> I hoped it is obvious, but maybe not. Understanding that detail
> >> should help resolve at least some of your confusion about these
> >> config snippets and how "tiny" changes to them are affecting Squid
> behaviour in major ways.
> >>
> >
> > No, it isn't obviuos to me, and yes, I am still trying to understand by re-
> reading wiki squid doc and other sites about peek and splice decisions and
> about the steps too.
> >
> >>
> >>>
> >>> So in a brief I think that  config A is more secure.
> >>>
> >>
> >> No. Config (A) from the earlier post actively *creates* insecurity
> >> by;
> >
> > But,according to http://marek.helion.pl/install/squid.html; It's supposed
> that config  "A" check server certificate. Because it is peeking at step2 and
> splicing at step3 the whitelist sites.
> >
> 
> The peek at step2 line has another ACL condition which must _also_ be true
> for peek to actually happen. In every transaction where that noBumpSites is
> *false* the ssl_bump ACL processing continues on and finds the "bump" line.
> 
> (that much is just regular ACL processing logic, not SSL-Bump specific).
> 
> 
> Also, Marek is another slightly confused admin like yourself. So that page
> follows what he understands and has a few mistakes. It is also from
> 2 years ago, since then we have fixed some bugs and TLS has had features
> added and removed (notably TLS/1.3 rollout begun and SSL formally
> obsoleted).
> 
> >>
> >>  1) hid

Re: [squid-users] Squid and DNS

2018-09-06 Thread Julian Perconti
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Thursday, September 6, 2018 09:57
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Squid and DNS
> 
> On 6/09/18 7:22 PM, Matus UHLAR - fantomas wrote:
> > On 06.09.18 02:40, Julian Perconti wrote:
> >
> >> Is it mandatory for Squid to use only one DNS/caching nameserver?
> >
> > usually, people have multiple DNS servers configured to fail over in
> > case one of them fails.
> > in some cases, client can balance the load, or prefer server with
> > faster responses.
> >
> > There should be no problem of this kind, unless one of your DNS
> > servers is broken.
> >
> 
> If it wasn't clear already, yes Squid can use multiple resolvers BUT they need
> to be recursive resolvers and every one of them needs to be able to resolve
> *all* possible domains Squid will ask about.
> 
> You cannot have one of them being authoritative for local domain names and
> another for public queries. DNS don't work like that.

So Squid cannot use one resolver for local and public domains/addresses and a
second resolver for only public domains/IPs? Both being recursive resolvers.

Am I right?
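For completeness, the resolvers Squid uses can be set explicitly in squid.conf with the dns_nameservers directive; per Amos's point, each one must be recursive and able to answer all queries. A sketch with placeholder addresses:

```
# squid.conf: two equivalent recursive resolvers (placeholder IPs)
dns_nameservers 192.0.2.53 198.51.100.53
```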

> 
> Amos
> 
> 



Re: [squid-users] Squid and DNS

2018-09-06 Thread Julian Perconti
> > So squid can not use one resolver for a local and public domains/addresses
> and other or a second resolver to only public domains/ip? Both recursive
> resolvers.
> >
> 
> Correct.

Thank you for the clarification.

> 
> 
> Amos



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-10 Thread Julian Perconti
> -----Original Message-----
> From: squid-users  On behalf of
> Amos Jeffries
> Sent: Monday, 10 September 2018 01:13
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] About SSL peek-n-splice/bump configurations
> 
> >
> > ...So that means that squid processes the SslBump directives:
> >
> > 1: maybe more than one time in a single request...?
> >
> 
> Yes. Up to 3 times. A peek or splice action causes another check later.
> 
> 
> > 2: In a sequential order (as You or Alex said in an earlier post)
> >
> > - ... and "automagically" determine what to do if the ACL matches or not?
> > With this I mean, for example, that in a config there could be first, in
> > this order, a step1 directive, then a step3 directive, and finally a
> > step2? With an ACL of course.
> 
> No, this order is fixed and follows the TLS handshake stages/steps:
>  step1, then step2, then step3. Exact same order as on the Squid wiki page.
> 
> The automagic is only applied when
>  a) no ssl_bump lines at all match (auto-decide for you), and
>  b) an action that matches is not valid for the step (auto-ignore that line).
> 
> 
> > To clarify: the SslBump order is decisive, but it also depends on what I
> > want to do with the steps and ACLs.
> >
> 
> Yes. Though what you understand by that statement still seems to differ a bit
> from what we understand it to mean.
> 
> 
> > Let's say... it is *not* mandatory to give Squid SslBump step directives
> > like:
> >
> > At step1 do x
> > At step 2 do y
> > At step3 do z
> >
> > And so on...
> >
> 
> Well, it's true you don't *have* to. But then you don't have to use SSL-Bump
> at all either.
> 
> If you want to be sure what Squid is doing, and that it will continue to do 
> that
> reliably then telling it for each step is a good idea.

Yes, but see below for what my conclusion is.

> 
> 
> > When should I stare?
> 
> When you, as the admin with meta knowledge about the overall policy -
> know that a bump is wanted to happen later.
> 
> 
> >
> >
> > Peeking at step2 does not prevent this?
> 
> Peeking at step2 precludes / forbids later bumping, so yes.
> 
> What I have been trying to highlight is that there is traffic that config (A)
> allows to go through *without* any peek at step2. It reaches the "ssl_bump
> bump" line.
> 
> 
> 
> >
> >
> > Quick answer: Bump.

A better term would have been "Short answer"

> >
> 
> Then put the below line after your "peek step2 noBumPSites" line:
> 
>   ssl_bump stare step2
> 
> 
> >
> > Wouldn't it be less ambiguous to Squid if I made this change?
> >
> > ssl_bump peek step1 all  -> A question: am I not peeking the noBumpSites
> > list here too? Should I add !noBumpSites to the end of this line? Just a
> > doubt.
> > ssl_bump peek step2 nobumpSites
> > ssl_bump splice step3 nobumpSites
> > ssl_bump stare step2 nobumpSites  -> explicit staring whitelist (I haven't
> > tested this); it is just an idea... does it make sense?
> 
> 
> The ACLs on the line above are the same as the peek line earlier. So the
> peek line matched already, nothing reaches this line.
>  
> 
> Less ambiguous, yes, if your knowledge of Squid ACLs is low. The FAQ
> link above should help a bit here.
> 
> 
> Your policy ("Quick answer: Bump.") was to prefer bump'ing. For that to
> happen as step 3 it needs a stare first at step 2.
> 
> So consider the stare here as the normal action this step2 is supposed
> to perform. With the peek line being the whitelist preventing stare+bump
> for special cases.
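Put together as configuration, the policy described here (a peek whitelist that later gets spliced, with stare+bump as the default) would look roughly like this. This is a sketch, not the poster's exact config; `noBumpSites` stands for the whitelist ACL discussed in the thread:

```
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1                 # read the client SNI first
ssl_bump peek step2 noBumpSites     # whitelist: peeking precludes bumping
ssl_bump splice step3 noBumpSites   # tunnel whitelisted sites untouched
ssl_bump stare step2                # default: stare so bumping stays possible
ssl_bump bump step3
```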
> 
> > (also I don't understand what exactly the stare action does)
> 
> Hmm. Think of "peek" as a postal worker reading postcards people send in
> the mail. "stare" as the postal worker both reading and rewriting them
> to remove words (s)he doesn't like or understand.
> 
> Say a postcard ended with the words "never qwertyuio". A peek'ing
> postie would still deliver it unchanged; a stare'ing postie would
> deliver a postcard with the last word "never".
> 
> If the sender/receiver of the postcard had agreed to start using crypto
> every time a message ended with "qwertyuio" - the peeking postie would
> then just see a bunch of garbage/crypted postcards start to happen. The
> stare'ing one would be able to read the content, maybe even continue
> changing things.
> 
> The exact details are more complex of course, but essentially the same
> things going on.
> 
> 
> 
> > I think this config avoids the "old client-first insecure" behaviour. Am I
> > right? And Squid checks the server certificate before splicing.
> >
> 
> Step 3 is where the "preclude" starts to matter.
> 
> 
> The Step 2 action determines whether the original clientHello or one
> rewritten by Squid gets sent to the server in order to get a serverHello
> out of it.
> 
> AIUI, "stare step2" precludes "splice step3". So that line should be
> ignored by Squid unless there was a peek done at step2.
> 
> To follow that postal analogy; client-first is like the postal worker

Re: [squid-users] Squid 4.1 Error negotiating SSL connection

2018-07-06 Thread Julian Perconti
>>>> On Wednesday, 4 July 2018 01:21:12 -03, Amos Jeffries wrote:
>>>>
>>>> On 04/07/18 12:06, Julian Perconti wrote:
>>>>> Hi all,
>>>>> 
>>>>>  
>>>>> 
>>>>> I have installed squid 4.1 on debian 9 with openssl 1.1.0f on
>>>>> transparent mode.
>>>>> 
>>>>>  
>>>>> 
>>>>> I need to know how to track this error (debugging it is almost
>>>>> impossible, I mean examining the FD, etc.):
>>>>> 
>>>> 
>>>> The SSL-Bump activity is fairly complex at times and involves many
>>>> different layers and components. So an ALL,9 or ALL,7 debug log may be
>>>> necessary to trace the actions.
>>>> 
>>>>>  
>>>>> 
>>>>> kid1| Error negotiating SSL connection on FD 19:
>>>>> error:0001:lib(0):func(0):reason(1) (1/-1)
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> Those annoyingly opaque error messages are produced by your OpenSSL 
>>>> library.
>>>> 
>>>> Other programs showing that same string apparently are negotiating
>>>> protocol version for the messaging layer or handshake format which are
>>>> incompatible with the choice of ciphers. eg SSLv2 message syntax with
>>>> TLS ciphers, or SSLv3 message syntax with  TLS/1.2-only ciphers.
>>>> 
>>>> Since you have done the cipher test, it may be the SSLv2 issue or some
>>>> TLS extension being attempted.
>>>> 
>>>> 
>>>> If cache.log is too obscure a packet trace with wireshark may be less
>>>> so. The clear-text part of TLS at the start should have better hints
>>>> about the issue, whatever it is.
>>>> 
>>>> 
>>>>  
>>>>> 
>>>>> There are a lot of them in cache.log when mobile devices use
>>>>> (unsuccessfully) apps like Instagram/Pinterest/Facebook/Twitter, etc.
>>>>> 
>>>>>  
>>>>> 
>>>>> It is not an outgoing-cipher problem either, because I just tried:
>>>>> tls_outgoing_options cipher=ALL (only for testing)
>>>>> 
>>>> 
>>>> This test is mistaken.
>>>> 
>>>> "cipher=ALL" and "options=ALL" actually mean to actively *enable* lots
>>>> of things OpenSSL would normally disable. This still counts as
>>>> restriction, because only things compatible with the most obsolete or
>>>> broken cipher/option can be negotiated.
>>>> 
>>>> A correct test would be to _remove_ the cipher=* option entirely from
>>>> your config and see what changes.
>>>> 
>>>> With no manual restrictions the issues are then limited to natural
>>>> differences in OpenSSL version between client and Squid.
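In squid.conf terms, the test suggested above is simply removing the cipher= restriction rather than setting cipher=ALL. A sketch of the change (not a recommended final config):

```
# Misleading test: cipher=ALL actively enables obsolete ciphers, which
# still restricts what can be negotiated.
#tls_outgoing_options cipher=ALL

# Correct test: no cipher= option at all -- let the OpenSSL defaults
# negotiate, then compare the results in cache.log.
tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt
```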
>>>> 
>>>> 
>>>>> 
>>>>> From any PC those sites work well, so it is not a missing-certificate
>>>>> problem.
>>>>> 
>>>> 
>>>> When SSL-Bump is done crypto issues are the union of configured
>>>> capabilities at client (PC), proxy (Squid), server - plus the 3
>>>> particular crypto libraries on each of those uses. So 6 possible points
>>>> of failure, all affecting each other.
>>>> 
>>>> I find it is often a LOT easier (and more successful) to look at the TLS
>>>> handshake itself and see what is actually happening. Then figure out
>>>> from there what needs tuning to work around it.
>>>> 
>>>> 
>>>>> 
>>>>> Here a copy of most relevant config:
>>>>> 
>>>>>  
>>>>> 
>>>>> =CFG==
>>>>> 
>>>>>  
>>>>> 
>>>>> http_port 3128
>>>>> 
>>>>> http_port 3129 intercept
>>>>> 
>>>>> https_port 3130 intercept ssl-bump \
>>>>> 
>>>>>   cert=/etc/squid/ssl_cert/squid4ssl.pem \
>>>>> 
>>>>>   key=/etc/squid/ssl_cert/squid4ssl.pem \
>>>>> 
>>>>>   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>>>>> 
>>>>>  
>>>>> 
>>>>> sslcrtd_program /lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB
>>>>> 
>>>>>  
>>>>> 
>>>>> tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt
>>>>> 
>>>>> tls_outgoing_options cafile=/etc/squid/ssl_cert/cabundle.pem
>>>>> 
>>>>> tls_outgoing_options options=NO_SSLv3
>>>>> 
>>>> 
>>>> This NO_SSLv3 may be part of issue. AFAIK when SSLv3 compatibility is no
>>>> longer required the latest OpenSSL is able to move to pure TLS message
>>>> syntax which has a few usually very minor differences which TLS/1.3 uses.
>>>> 
>>>> The services you mention are the ones IME most likely to be adopting
>>>> TLS/1.3 already when clients like your Squid accept it. Which is where
>>>> PC vs Squid library differences can lead to drastically different
>>>> visible outcomes.
>>>> 
>>>> 
>>>>> tls_outgoing_options
>>>>> cipher=ALL:!SSLv2:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:!aNULL:!eNULL
>>>>> 
>>>> 
>>>> 
>>>> HTH
>>>> Amos



Hi Amos,

The temporary workaround I found was to add the domains that cause those
errors to the splice domains list.

So in cache.log I now only see "TCP_TUNNEL", and (almost all) the apps on
mobile phones work fine.
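As a sketch, that workaround amounts to splicing a file-based domain list ahead of any bump rule. The ACL name and file path here are made up for illustration:

```
# Domains whose apps break when bumped -- spliced (tunneled) instead.
acl step1 at_step SslBump1
acl spliceDomains ssl::server_name "/etc/squid/splice-domains.txt"

ssl_bump peek step1
ssl_bump splice spliceDomains    # these show up as TCP_TUNNEL in the logs
ssl_bump bump all
```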

Thank You. 


Re: [squid-users] Delay pools in squid4 not working with https

2018-07-12 Thread Julian Perconti
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Thursday, 12 July 2018 20:31
> To: Julian Perconti ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Delay pools in squid4 not working with https
> 
> On 07/12/2018 05:19 PM, Julian Perconti wrote:
> 
> > From my side, the tests were done with full SSL-Bump; downloading a
> > file from: https://speed.hetzner.de/
> >
> > No splice.
> 
> My "not working" statement was specific to tunneling code. When Squid
> bumps, it does not tunnel, so your tests did not tickle the broken code.
> We do not yet know whether prazola is bumping HTTPS traffic.
> 
> Tunneling happens when handling CONNECT requests without SslBump and
> when splicing TLS traffic with SslBump.
> 
> Alex.

My delay_pool cfg is working, without splicing/tunneling the connection.

When I download a file from https://speed.hetzner.de/ (without splicing
anything), the delay pool slows the download down once the limit is reached.

Maybe I misunderstood something.

Regards




Re: [squid-users] Delay pools in squid4 not working with https

2018-07-12 Thread Julian Perconti
> -----Original Message-----
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Thursday, 12 July 2018 21:20
> To: Julian Perconti ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Delay pools in squid4 not working with https
> 
> On 07/12/2018 06:16 PM, Julian Perconti wrote:
> >> De: Alex Rousskov
> >> If you start splicing/tunneling, it will probably stop working.
> 
> 
> > OK, but isn't that supposed to be the normal behaviour?
> 
> 
> No, Squid should apply delay pools to all traffic.

OK, I will splice https://speed.hetzner.de/ and then tell you what happens
with the delay pool.

One important thing: the delay_pool cfg that Paolo has is considerably more
complex than mine.

> 
> 
> > I mean, TCP_TUNNEL = Squid just forwards, so Squid cannot do anything
> > about the spliced connection.
> 
> 
> Squid knows how many bytes it is forwarding, and that is all Squid needs to
> know to shape traffic.
> 
> Alex.



Re: [squid-users] Delay pools in squid4 not working with https

2018-07-12 Thread Julian Perconti
> -----Original Message-----
> From: Julian Perconti [mailto:vh1...@yahoo.com.ar]
> Sent: Thursday, 12 July 2018 21:24
> To: 'squid-users@lists.squid-cache.org'
> Subject: RE: [squid-users] Delay pools in squid4 not working with https
> 
> > -----Original Message-----
> > From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> > Sent: Thursday, 12 July 2018 21:20
> > To: Julian Perconti ; squid-users@lists.squid-cache.org
> > Subject: Re: [squid-users] Delay pools in squid4 not working with https
> >
> > On 07/12/2018 06:16 PM, Julian Perconti wrote:
> > >> De: Alex Rousskov
> > >> If you start splicing/tunneling, it will probably stop working.
> >
> >
> > > OK, but isn't that supposed to be the normal behaviour?
> >
> >
> > No, Squid should apply delay pools to all traffic.

OK, I did not know that...

> 
> OK, I will splice https://speed.hetzner.de/ and then tell you what happens
> with the delay pool.
> 
> One important thing: the delay_pool cfg that Paolo has is considerably more
> complex than mine.

Confirmed.

Splicing.. speed.hetzner.de

TCP_TUNNEL/200 4452 CONNECT 88.198.248.254:443 - ORIGINAL_DST/88.198.248.254 -

The delay_pool does not work.

Download speed never goes down.

delay_pool class 2 cfg:

delay_pools 1 
delay_class 1 2
delay_access 1 allow all

delay_parameters 1 -1/-1 10/104857600

Version:

Squid Cache: Version 4.1
Service Name: squid

This binary uses OpenSSL 1.1.0f  25 May 2017. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

> 
> >
> >
> > > I mean, TCP_TUNNEL = Squid just forwards, so Squid cannot do anything
> > > about the spliced connection.
> >
> >
> > Squid knows how many bytes it is forwarding, and that is all Squid
> > needs to know to shape traffic.
> >
> > Alex.



Re: [squid-users] Delay pools in squid4 not working with https

2018-07-12 Thread Julian Perconti
> -----Original Message-----
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Thursday, 12 July 2018 21:03
> To: Julian Perconti ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Delay pools in squid4 not working with https
> 
> On 07/12/2018 05:42 PM, Julian Perconti wrote:
> >> De: Alex Rousskov
> >> On 07/12/2018 05:19 PM, Julian Perconti wrote:
> >>
> >>> From my side, the tests were done with full SSL-Bump; downloading a
> >>> file from: https://speed.hetzner.de/
> >>>
> >>> No splice.
> 
> 
> >> My "not working" statement was specific to tunneling code. When Squid
> >> bumps, it does not tunnel, so your tests did not tickle the broken code.
> >> We do not yet know whether prazola is bumping HTTPS traffic.
> >>
> >> Tunneling happens when handling CONNECT requests without SslBump
> and
> >> when splicing TLS traffic with SslBump.
> 
> 
> > My delay_pool cfg is working.
> 
> Yes, I understand. I do not think anybody has claimed that your config should
> not be working. The only claim was that delay pools do not work when Squid
> tunnels traffic. Your Squid does not tunnel traffic.
> 
> 
> > Without splice/tunneling the connection.
> 
> ... and that is why it is working. If you start splicing/tunneling, it will 
> probably
> stop working.

OK, but isn't that supposed to be the normal behaviour?

I mean, TCP_TUNNEL = Squid just forwards, so Squid cannot do anything about
the spliced connection.

I don't know; I am just a Squid user... and BTW new to Squid SSL interception.

> 
> 
> Alex.



Re: [squid-users] Delay pools in squid4 not working with https

2018-07-11 Thread Julian Perconti
>> 
>> On Tuesday, 10 July 2018 18:57:43 -03, Alex Rousskov wrote:
>> 
>> 
>> On 07/10/2018 01:50 PM, Paolo Marzari wrote:
>>> My home server just updated from 3.5.27, everything is working fine, but
>>> delay pools seems broken to me.
>> 
>>> Revert to 3.5.27 and delays works again with every type of traffic.
>>> 
>>> I think there's something wrong with https traffic.
>> 
>> You are probably right. A few days ago, while working on an unrelated
>> project, we have found a bug in delay pools support for tunneled https
>> traffic. That support was probably broken by v4 commit 6b2b6cf. We have
>> not tested v3.5, so I can only confirm that v4 and v5 are broken.
>> 
>> The bug will be fixed as a side effect of "peering support for SslBump"
>> changes that should be ready for the official review soon. If you would
>> like to test our unofficial branch, the code is available at
>> https://github.com/measurement-factory/squid/tree/SQUID-360-peering-for-SslBump
>> 
>> 
>> HTH,
>> 
>> Alex.
>> 

I can confirm that delay_pools works fine with both HTTP and HTTPS traffic in
Squid 4 running on Debian 9.

Squid Cache: Version 4.1 
Service Name: squid 
 
Here the cfg: 
 
delay_pools 1 
delay_class 1 2 

delay_access 1 allow all 
 
delay_parameters 1 -1/-1 10/104857600 # ~100KBs/~100MB 
delay_initial_bucket_level 50
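For reference, the two class-2 parameter pairs are the aggregate and per-host restore/maximum values, in bytes per second and bytes. An annotated sketch, assuming the intended ~100 KB/s restore rate is written out as 102400 bytes/s:

```
delay_pools 1
delay_class 1 2                   # class 2: one aggregate bucket + per-host buckets
delay_access 1 allow all

# delay_parameters <pool> <aggregate restore/max> <per-host restore/max>
# -1/-1 leaves the aggregate bucket unlimited.
delay_parameters 1 -1/-1 102400/104857600
delay_initial_bucket_level 50     # buckets start at 50% of their maximum
```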

Regards


Re: [squid-users] SSL errors with Squid 3.5.27

2018-06-28 Thread Julian Perconti
Hi all:

Finally I migrated everything to Debian 9 with OpenSSL 1.1 and the Squid 4
(June 22/18) release (the latest one).

Everything seems to be going very well.

However, the dropbox client logs this error in cache.log:

kid1| ERROR: negotiating TLS on FD 35: error:141710F8:SSL 
routines:tls_process_server_hello:unknown cipher returned (1/-1/0)

squid version:

Squid Cache: Version 4.0.25-20180621-r887c98a
Service Name: squid

This binary uses OpenSSL 1.1.0f  25 May 2017. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu' 
'--localstatedir=/var/squid' '--libexecdir=/lib/squid' '--srcdir=.' 
'--datadir=/share/squid' '--sysconfdir=/etc/squid' '--with-default-user=proxy' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' 
'--with-openssl' '--enable-ssl-crtd' '--mandir=/share/man' 
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' 
'--enable-delay-pools' '--enable-icap' '--enable-cache-digests' 
'build_alias=x86_64-linux-gnu' --enable-ltdl-convenience

openssl version and ciphers:

OpenSSL 1.1.0f  25 May 2017

ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

squid.conf ciphers:

tls_outgoing_options 
cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

I still have not tried to see what happens with WhatsApp on iOS (the original
problem).

Any ideas? Fix?



[squid-users] Squid 4.3 assertion failed

2018-10-22 Thread Julian Perconti
Hi all,

Environment: 

Squid Cache: Version 4.3-20181014-r17614d5
Service Name: squid

This binary uses OpenSSL 1.1.0f  25 May 2017. For legal restrictions on
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu'
'--localstatedir=/var/squid' '--libexecdir=/lib/squid' '--srcdir=.'
'--datadir=/share/squid' '--with-cppunit-basedir=/usr' '--enable-inline'
'--enable-delay-pools' '--sysconfdir=/etc/squid'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-openssl' '--enable-ssl-crtd' '--mandir=/share/man'
'--enable-arp-acl' '--enable-wccpv2' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-icap' '--enable-icap-client' '--enable-cache-digests'
'--disable-ident-lookups' '--enable-poll' '--enable-truncate'
'--enable-heap-replacement' 'build_alias=x86_64-linux-gnu'

OS:

PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

The error (bug?):

   assertion failed: http.cc:1530:
"!Comm::MonitorsRead(serverConnection->fd)"

Sometimes cache.log shows this and Squid goes down; other times it just
appears in the log and Squid does not stop.

I don't remember exactly when this started to happen, but I think it has been
since Squid version >= 4.2 or 4.3.

Any idea?

Thank You



Re: [squid-users] Squid 4.3 assertion failed

2018-10-23 Thread Julian Perconti
> >> assertion failed: http.cc:1530: "!Comm::MonitorsRead(serverConnection->fd)"
> >
> >> Any idea?
> >
> > Without the stack trace, it is difficult to say much about this bug.
> > Please collect a stack trace from the crash and post it to Squid
> > bugzilla. If the stack trace looks similar to that in Bug 4896, please
> > post it there: https://bugs.squid-cache.org/show_bug.cgi?id=4896
> >
> > If you do not know how to enable core dumps and/or how to collect a
> > stack trace, please search Squid wiki for pointers or consult your
> > local sysadmin.
> >
> 
> 
> FWIW I'm not seeing any changes in those Squid versions which relate to
> persistent connection or pipeline handling.
> 
> There is  from last
> year which still lacks any hints about what might be causing it.
> 
> 
> FYI: Details on obtaining backtrace info can be found at
> <https://wiki.squid-cache.org/SquidFaq/BugReporting>. This issue though
> spans multiple transactions. So we will need a detailed (ALL,9 if
> possible) cache.log trace of everything that FD has been used for in addition
> to the trace. The trace itself is useful for identifying the code path and FD
> value to look at within that log.
> 
> Amos

Hi Alex/Amos

Since yesterday squid is running via this method in a cron script:

trap "rm -f $$.gdb" 0
cat <<EOF >$$.gdb
handle SIGPIPE pass nostop noprint
handle SIGTERM pass nostop noprint
handle SIGUSR1 pass nostop noprint
handle SIGHUP  pass
handle SIGKILL pass
handle SIGSEGV stop
handle SIGABRT stop
run -NYCd3
backtrace
quit
EOF
while sleep 2; do
  gdb -x $$.gdb /path/to/squid 2>&1 | tee -a squid.out
done

For now (see below) Squid has not generated the error/bug, and what appears
to be the cache.log output is showing up in the squid.out file.
Sometimes Squid works well for several days, and other times it stays alive
only a few hours after a restart (service or server restart).

When this bug happens I will let you know here and upload it to bugzilla

Thank you



Re: [squid-users] Squid 4.3 assertion failed

2018-10-24 Thread Julian Perconti
> Hi Alex/Amos
> 
> Since yesterday squid is running via this method in a cron script:
> 
> trap "rm -f $$.gdb" 0
> cat <<EOF >$$.gdb
> handle SIGPIPE pass nostop noprint
> handle SIGTERM pass nostop noprint
> handle SIGUSR1 pass nostop noprint
> handle SIGHUP  pass
> handle SIGKILL pass
> handle SIGSEGV stop
> handle SIGABRT stop
> run -NYCd3
> backtrace
> quit
> EOF
> while sleep 2; do
>   gdb -x $$.gdb /path/to/squid 2>&1 | tee -a squid.out
> done

Hi all,

After 2-3 days, Squid crashed:

First signal:

Program received signal SIGHUP, Hangup.
0x741b00a3 in __epoll_wait_nocancel ()
at ../sysdeps/unix/syscall-template.S:84
84  ../sysdeps/unix/syscall-template.S: No such file or directory.
#0  0x741b00a3 in __epoll_wait_nocancel ()
at ../sysdeps/unix/syscall-template.S:84
#1  0x559783f4 in Comm::DoSelect (msec=)
at ModEpoll.cc:227
#2  0x5592197e in CommSelectEngine::checkEvents (
this=, timeout=) at comm.cc:1847
#3  0x55764345 in EventLoop::checkEngine (
this=this@entry=0x7fffe940, engine=engine@entry=0x7fffe6a8, 
primary=primary@entry=true) at EventLoop.cc:36
#4  0x5576452d in EventLoop::runOnce (this=this@entry=0x7fffe940)
at EventLoop.cc:115
#5  0x55764648 in EventLoop::run (this=0x7fffe940)
at EventLoop.cc:83
#6  0x557cf933 in SquidMain (argc=, 
argv=) at main.cc:1707
#7  0x556b53a4 in SquidMainSafe (argv=, argc=2)
at main.cc:1415
#8  main (argc=2, argv=0x7fffed68) at main.cc:1403
A debugging session is active.

The crash (assertion failed)

2018/10/24 09:44:29| assertion failed: http.cc:1530: 
"!Comm::MonitorsRead(serverConnection->fd)"

Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x740fb42a in __GI_abort () at abort.c:89
#2  0x5573c57c in xassert (
msg=msg@entry=0x55aec788 "!Comm::MonitorsRead(serverConnection->fd)", 
file=file@entry=0x55aeb5c5 "http.cc", line=line@entry=1530)
at debug.cc:618
#3  0x557a1118 in HttpStateData::maybeReadVirginBody (
this=0x574c6888) at http.cc:1530
#4  0x5579d3db in HttpStateData::sendRequest (
this=this@entry=0x574c6888) at http.cc:2198
#5  0x5579df50 in HttpStateData::start (this=0x574c6888)
at http.cc:2311
#6  0x559156b2 in JobDialer::dial (this=0x5db33390, 
call=...) at ../../src/base/AsyncJobCalls.h:174
#7  0x559110e1 in AsyncCall::make (this=this@entry=0x5db33360)
at AsyncCall.cc:40
#8  0x559124b1 in AsyncCallQueue::fireNext (
this=this@entry=0x56105e00) at AsyncCallQueue.cc:56
#9  0x55912819 in AsyncCallQueue::fire (this=0x56105e00)
at AsyncCallQueue.cc:42
#10 0x55764549 in EventLoop::dispatchCalls (this=0x7fffe940)
at EventLoop.cc:144
#11 EventLoop::runOnce (this=this@entry=0x7fffe940) at EventLoop.cc:121
#12 0x55764648 in EventLoop::run (this=0x7fffe940)
at EventLoop.cc:83
#13 0x557cf933 in SquidMain (argc=, 
argv=) at main.cc:1707
#14 0x556b53a4 in SquidMainSafe (argv=, argc=2)
at main.cc:1415
#15 main (argc=2, argv=0x7fffed68) at main.cc:1403
A debugging session is active.


The assertion seems to be similar to bug
https://bugs.squid-cache.org/show_bug.cgi?id=4896, as Alex said, but I am not
sure.
Should I post there anyway?

Additional info: I can now reproduce the crash. It happens when a client
tries to connect to a server for which Chrome returns:
ERR_CONNECTION_RESET / EMPTY RESPONSE

I can confirm that the site is not working (I tested without Squid and from
another Internet connection).

On the other hand, what can I do about those Squid crashes?

Squid info (recompiled yesterday):

Squid Cache: Version 4.3-20181021-r17614d5
Service Name: squid

This binary uses OpenSSL 1.1.0f  25 May 2017. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu' 
'--libexecdir=/usr/lib/squid' '--srcdir=.' '--datadir=/usr/share/squid' 
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-delay-pools' 
'--sysconfdir=/etc/squid' '--with-logdir=/var/log/squid' 
'--with-pidfile=/var/run/squid.pid' '--with-openssl' '--enable-ssl-crtd' 
'--mandir=/usr/share/man' '--enable-arp-acl' '--enable-wccpv2' 
'--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' 
'--enable-icap' '--enable-icap-client' '--enable-cache-digests' 
'--enable-heap-replacement' 'build_alias=x86_64-linux-gnu'

If you want, I can post the config file.

Hope it helps,

Thank You


Re: [squid-users] Squid 4.3 assertion failed

2018-10-24 Thread Julian Perconti
> > Program received signal SIGHUP, Hangup.

Yes, I did not realize that I had executed 'squid -k reconfigure', hence the
SIGHUP signal.

I do not know if the following is relevant, but:
When the exception occurred, I had executed 'squid -k reconfigure' earlier.
Then I did a full Squid stop and a clean start, entered the site that had
caused Squid to crash, and the crash did not happen again; I tried many times
without reproducing it.

> > By other hand, what can I do with those squid's crashes?
> 
> Please try the patch for bug 4864. That bug may be a duplicate of bug 4896,
> and it has an unpolished/unofficial fix:

Let me know if I patched correctly:

~ (...)/squid-4.3-20181021-r17614d5# patch -p1 < 
/etc/squid/debug/patches/SQUID-385-Comm_MonitorsRead-assertion-t3.patch
patching file src/FwdState.cc (with no errors or warnings)

Then make / make install, and started Squid from systemd (i.e. systemctl
start squid.service) instead of under gdb.

And now when I try to connect to the site that made Squid crash, I get an
error page from Squid (instead of a Chrome error page, like before) saying
the same thing in different words: "(...): (104) Connection reset by peer".
And Squid does not crash.
Even after a 'squid -k reconfigure' and accessing that site again, it does
not crash.

Anyway I am going to run squid via gdb again, and wait for a similar crash.

>   https://bugs.squid-cache.org/show_bug.cgi?id=4864
> 
> Please update bug 4896 to let us know whether bug 4864 fix worked for you
> (including cases where the posted patch no longer applies).

I'll let you know whether the patch definitely solves the bug after testing
it for a few days.

Thank You!



Re: [squid-users] Squid crashes with "!Comm::MonitorsRead(serverConnection->fd)"

2018-11-07 Thread Julian Perconti
> > 2018/11/07 12:41:45 kid1| assertion failed: http.cc:1530:
> > "!Comm::MonitorsRead(serverConnection->fd)"
> 
> IIRC, there are relevant bug reports in bugzilla.
> 

Hi, 

See if this helps or is similar to your case:
https://bugs.squid-cache.org/show_bug.cgi?id=4896

If your Squid's crash is similar, there is a patch available as a workaround.




Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-19 Thread Julian Perconti
I am replying to myself due to a bounce; I have to re-enable my membership to the 
list at least 3 times a month.
Maybe a problem with Yahoo.

>>> Alex: After a splice rule is applied, SslBump is over. No  more rules are
>>> checked. No more loops are iterated. Squid simply "exits" the  SslBump
>>> feature (and becomes a TCP tunnel).

OK, that is what confused me, and that is why I asked about what you said.

>> What about the meaning of the ACL's at step1 when splice?
>
>* If the splice rule ACLs match, the splice rule is applied. In that
>case you can consult my statement above.
>
>* If the splice rule ACLs do not match, then the splice rule is not
>applied. My statement above explicitly does not cover this case -- it
>starts with "after a splice rule is APPLIED".
>
>
>> e.g.:
>> There only these two rules for ssl_bump statements:
>>
>> ssl_bump splice sitesAB
>> ssl_bump splice SitesCD
>
>> I guess that here, Squid has to do 2 loops at outer/main loop to
>> evaluate step1 twice, due to rules differs (sitesAB and sitesCD ACL)
>> and see if both match to splice.

I think I made a mistake in the sentence above. 
I should have said "(..) Squid has to do 2 iterations of the inner loop while he 
is at the main loop (at SslBump1)".

>I do not know why you are guessing instead of carefully applying the
>already documented procedure, but you guessed wrong. At any step, the
>first matching rule is applied. For example, if sitesAB matches, then
>Squid splices without checking the second (i.e. SitesCD) rule.
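
The first-match behavior described above can be sketched in a few lines of
Python (a toy illustration with hypothetical helper names, not Squid code):

```python
def first_matching_rule(rules, matches):
    """Return the action of the first rule whose ACL matches, plus the
    list of ACLs that were actually evaluated (showing the short-circuit)."""
    checked = []
    for action, acl in rules:
        checked.append(acl)
        if matches(acl):
            return action, checked
    return None, checked

# The two-rule example from the thread:
rules = [("splice", "sitesAB"), ("splice", "SitesCD")]

# If sitesAB matches, SitesCD is never consulted:
action, checked = first_matching_rule(rules, lambda acl: acl == "sitesAB")
assert action == "splice"
assert checked == ["sitesAB"]
```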

Well, I am guessing because many things are not completely clear to me and/or 
easy to understand at all; I am new to TLS filtering.
For example, I would never have thought that in the given example the second rule 
(SitesCD) will never be checked later.
I asked/wrote that example with the inner loop in mind; I'm sorry.

>> Are You (perhaps) talking about the examples in the thread and not what 
>> happens "in general"?
>
>My statements above are general except the "For example..." sentence
>that refers to your specific example.

Its good to know.

>> In which case the "noBumpSites" ACL could have not match? I mean if I
>> tell a Squid: "splice at step1 this.site.net" How that matches can
>> fail?
>
>Roughly speaking, the server_name ACL matches at step1 when the real or
>fake CONNECT Host information match one of the configured server names.
>
>For example, if you are intercepting or if the real CONNECT request
>contains an IP address (rather than a host name), then the server_name
>ACL matches if the reverse DNS lookup for that IP address is successful
>and matches at least one of the configured server names. In other cases,
>the ACL does not match during step1.
>
>The reality is more complex than the above rough summary because domain
>name comparison is a complex algorithm. Consult the latest Squid
>documentation for details. Also, please do not forget that step2
>matching adds checking TLS client SNI name, and step3 matching adds
>checking certificate Subject names. It gets really complex...
>
>For example, the Host header of a CONNECT request may not be the same as
>the TLS client-supplied SNI name, and/or the server certificate subject
>name may differ. These differences (and other random factors like DNS
>inconsistencies) may result in the server_name ACL match result changes
>across the steps.
>
>Modern Squids have additional server_name options that control some of
>the matching nuances discussed above.

That's what I imagined you meant (and worried about too), without any real 
knowledge. And now you have just confirmed it. 
So things become a little more delicate. 
And *now* I understand why you put so much emphasis on saying: "If the rule 
matches..."

>Alex.

Thank You.



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-19 Thread Julian Perconti
>After a splice rule is applied, SslBump is over. No  more rules are 
>checked. No more loops are iterated. Squid simply "exits" the  SslBump 
>feature (and becomes a TCP tunnel).

How is that? What about the meaning of the ACLs at step1 when splicing?

e.g.:
There only these two rules for ssl_bump statements:

ssl_bump step1 splice sitesAB
ssl_bump step1 splice SitesCD

I guess that here Squid has to do 2 iterations of the outer/main loop to evaluate 
step1 twice, since the rules differ (sitesAB and SitesCD ACLs), and see if either 
matches to splice. 
Probably this example does not make sense ("Why not use just 1 ACL instead of 
2?"), but it is an example to understand and fix ideas.

Are you (perhaps) talking about the examples in the thread, and not about what 
happens "in general"?

> If noBumpSites matches at step2, then, yes, Squid will splice at step3 
> by default. Otherwise, no; Squid will bump at step3 by default.

[... ]

You mentioned that explanation twice.
The question (maybe obvious) is: in which case could the "noBumpSites" ACL not 
match? I mean, if I tell Squid "splice at step1 this.site.net", how can that 
match fail?
Maybe you were referring to the case where a site is simply not listed in the ACL.

> >   ssl_bump splice noBumpSites # This line reaches a splice rule at step1
> >   ssl_bump stare
> 
> > Squid is telling to the client: "I will not touch any TLS byte. 
> > [...] I will do as many checks as possible then You will be connected..."
> 
> The configuration above does not match your summary because the 
> configuration has a "stare" action that may run at (step1 and) step2 
> (and, hence, a possibility of the bump action at step3). Staring at
> step2 and bumping (at any step) modify TLS bytes, of course.
> 
> Perhaps your summary only applies to the cases where noBumpSites 
> matches (either at step1 or at step2), but the summary did not make 
> that clear.

Here arises more or less the same doubt as above, and as in my final question.

> There is a big difference between explaining Squid actions for a 
> particular transaction and summarizing what a particular configuration 
> means (for all transactions). Unless noted otherwise, I am focusing on the 
> latter.
> 
> AFAICT, the primary difference between
> 
>   ssl_bump peek noBumpSites
>   ssl_bump stare
> 
> and
> 
>   ssl_bump splice noBumpSites
>   ssl_bump stare
> 
> is that the former requires a noBumpSites match at step2 for the 
> connections to be spliced.

Yes, the condition you mention is mandatory, but again: why could that 
requirement fail to match?

Thank you for your patience.



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-21 Thread Julian Perconti
Hi all.

I will (finally) go with this SslBump config, although I still have some 
doubts...
I think it's time to finish this thread.

#  TLS CFG
acl noBumpSites ssl::server_name_regex -i "/etc/squid/url.nobump"

# steps ACL
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

#  SslBump actions
ssl_bump peek step1
ssl_bump splice noBumpSites
ssl_bump stare step2
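
My reading of the expected per-step outcome of these three rules, as a tiny
Python sketch (a hypothetical summary of the thread, not Squid internals):

```python
# Expected outcome per step for the config above (my interpretation):
# step1 always peeks; at step2 noBumpSites are spliced and everything
# else is stared at; after a stare, the default action at step3 is bump.
def expected_action(step, host_in_nobump):
    if step == 1:
        return "peek"
    if step == 2:
        return "splice" if host_in_nobump else "stare"
    return "bump"  # step3 default after a stare at step2

assert expected_action(2, True) == "splice"
assert expected_action(3, False) == "bump"
```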

The TLS config "explained" as well as I can understand it:
*Clarification*: I may quote some words out of context, but Alex told me that he 
almost always speaks "in general terms" about what Squid does.

# First rule:
ssl_bump peek step1 #  Step 1 is the only step that is always performed.

If I do not peek at step1 and instead splice directly, what the wiki warns about 
happens (this was checked):

" Bump All Sites Except Banks
" Usually does not work for requests that go to non-banks -- they will not be 
bumped." (Verified)
" Depending on other settings, Squid may terminate connections to banks if 
Squid cannot validate client SNI (Host header forgery detection) or the server 
certificate."
The wiki example config for this warning is:
  ssl_bump splice serverIsBank
  ssl_bump peek all
  ssl_bump bump all

So my conclusion is: "It's "better" to peek at step1 (to avoid: ...does not work 
for requests that go to non-banks)".

# Second rule:
ssl_bump splice noBumpSites 

Here I have a doubt, I'm sorry.
Based on the words above and the Squid behaviour I mentioned, I think this rule 
should implicitly match only at step2.

Alex words: 

>"So, "yes", Squid only executes the first rule action _when_ the first
>rule action is applicable and its ACLs match at every step, but, "no",
>Squid does not make a bunch of steps with only the first rule in mind."

With the overall logic in mind, the first impression is that the second rule 
could match at step1 and at step2 too, as if this rule were the first one (but 
it is the second).
However, as I said above, if the splice is the first rule instead of the peek, 
Squid's behaviour changes.

>After a splice rule is applied, SslBump is over. No  more rules are 
>checked. No more loops are iterated. Squid simply "exits" the  SslBump 
>feature (and becomes a TCP tunnel).

Here Alex probably (I am not sure) referred to a "splice all" rule. In that case 
it is clear that "splice is a final action", so there are no further checks.
"Actions splice, bump, and terminate are final actions: They prevent further 
processing of the ssl_bump rules."

But in my config there is an ACL next to the splice. That is why I asked "But 
doesn't the ACL matter?" in an earlier mail.

Therefore, given Alex's statement above: will Squid ignore the last rule?
I checked, and the answer is no. If I remove the last rule (stare step2), all 
the traffic is spliced.
I think the reason is: (explicit) peek at step1 > (implicit) peek at step2 > 
result: default splice all (peeking at step2 precludes future bumping).
Moreover, if I remove the last rule, I think the second rule will be ignored; in 
reality it would not make sense.

# Third/last rule:
ssl_bump stare step2 # stare at step2 so implicit and "secure" default bump 
action at step3.

Probably something (or all) of what I said is WRONG.

Thank You.



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-22 Thread Julian Perconti
> > # Second rule:
> > ssl_bump splice noBumpSites
> >
> > I think that this rule should implicity match only at step2.
> 
> I do not know what "implicitly match" means here, but yes, the splice rule
> may only match at step2 in this configuration:

When I say "implicit" I mean that no step is specified in the rule.

> * It cannot match at step1 because the earlier "peek" rule matches at step1.

Yes, rule #1 "matches all", so the domains in the "noBumpSites" ACL are also 
peeked. And that first rule will always match.
 
> * It is always reached during step2 because no rules above it can match
> during step2.

Yes, the first rule has an explicit peek at step1, hence no match at step2 is 
possible at the first rule or before the 2nd rule.

>Whether it matches during step2 depends on whether
> noBumpSites matches a particular transaction during step2.

If I understood you correctly, you are pointing here to an earlier message where 
you explained some reasons why "noBumpSites" might not always match.

> * It cannot match at step3 because for a splice rule to match at step3 a peek
> rule has to match at step2, and there is no peek rule that can match at step2
> in your configuration.

Although there is no peek rule at step2, in the second rule a final action is 
applied to noBumpSites (if it matches).
In fact, in case the 2nd rule cannot match (for any reason), there is an 
explicit stare rule at step2.
So I think it is almost impossible for a splice at step3 to happen in this 
configuration for the noBumpSites.
In the worst case, if rule #2 does not match, noBumpSites will be bumped, due to 
the stare at step2.

Is this reasoning correct?

> > However as I said above if the splice is the first rule instead the
> > peek, the squid´s behaviour changes.
> 
> Naturally. If you place the splice rule first, it may match during step1 as 
> well.
> If you do not, it cannot.

That was a comment to confirm that what the wiki doc warned about back in 2017 
is what happens now with Squid 4.2 (18/9 source).

> >> After a splice rule is applied, SslBump is over. No  more rules are
> >> checked. No more loops are iterated. Squid simply "exits" the
> >> SslBump feature (and becomes a TCP tunnel).
> 
> > Here, probably (not sure) Alex rerefered here to "splice all" rule.
> 
> I think you are ignoring or misinterpreting the verb "applied". Here, 
> "applied"
> means Squid has executed the rule action. Not just considered the rule
> containing that action, but actually ran that action. Applying a rule action
> implies that the rule ACLs (whatever they were) matched, of course. A rule
> action is never applied when the rule ACLs do not match.

Yes, I misinterpreted you more than once; I'm sorry. (You are writing in 
English, and I am reading/writing in an "almost-English" as well as I can.)
So, for the final action the ACL is important; this is what I tried to say.
I insist, because when you said that, I thought (without understanding the 
logic): "OK, therefore if I splice some.site.net at step2, the following lines 
are over; no more processing, no matter what ACL the rule has".

> > In that case is clear "splice is a final action" then no more future checks.
> 
> The notion of a "final" action does not depend on rule ACLs.

Here is where your explanation breaks my head. This is the most important of 
all my other confusions/misunderstandings.

In the config I posted there is a splice action in the middle, and only the 
"noBumpSites" are spliced (at least as checked with the logs).
And even with the splice action as the second rule, the 3rd rule is processed 
(Squid is still processing rules after the splice noBumpSites ACL).
I checked this because if I remove the last rule, all the traffic is spliced 
(due to peek at step1 and splice at step2) and the subsequent default actions.
I think this happens because, if there is no 3rd line staring at step2, then:
  
   ssl_bump splice noBumpSites = ssl_bump splice noBumpSites all. (Not sure; I 
will do a test with only one rule, ssl_bump peek step1, to see what Squid does.)

That would hold if this were the last rule, but in this configuration there is 
a 3rd rule which stares at step2.

> After Squid applies the "splice" action (in whatever context, for whatever 
> reason),
> SslBump processing for that transaction is over. Same for "bump" and
> "terminate" actions.

What exactly do you mean by "for that transaction"? Maybe that rule?
 
> > But in my config next to splice there is an ACL. That is why I asked: "But,
> doesn't the ACL matters?" in earlier mail.
> 
> ACLs (and other things) determine which rules match. After a rule matches,
> then Squid applies its action, and then the notion of a "final action" starts 
> to
> matter.

That statement clarifies things a bit more.

> > Will Squid ignore the last rule?
> 
> No. The last rule will be applied at step2 whenever noBumpSites mismatches
> at step2.


Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-26 Thread Julian Perconti
> > When I say "implicit" I want to mean that there is no any step specified in
> the rule.
> 
> Understood. Please avoid that word usage. In this context, implicit means
> "without being configured" or "by default". One could say that "default rules
> implicitly match", or that "a rule without any ACLs matches implicitly", but
> one cannot say that "rule X containing ACL Y implicitly matched".

OK and sorry for that.

> > Althought there is no any peek rule at step2, in the second rule a
> > final action is applied to noBumpSites (if match)
> 
> Final actions at step2 do not matter when we are talking about what happens
> at step3. If a final action is applied at step2, there is no
> step3 as far as an ssl_bump configuration is concerned.

Yes: when a final action is applied at step2, the ssl_bump rules are over and 
there is no step3.

> It is impossible for any transaction to be spliced at step3 with this
> configuration. Whether the transaction matches or does not match
> noBumpSites at any given step is irrelevant for this statement.

OK: in this configuration any kind of splice at step3 is impossible, but not at 
step2. 
In fact, noBumpSites are being spliced (at least I can see TCP_TUNNEL in the 
logs).

> Correct. There is nothing "worse" about this case though.

By the term "worst" I meant that my intention is to splice the sites in the ACL 
(noBumpSites), not bump them.

> > Here is where I your explanation breaks my head. Here is the most
> > important confusion of all of my own other
> > confusions/misunderstanding.
> 
> Final actions are "bump", "terminate", and "splice". As you can easily see, 
> this
> statement does not depend on ACLs.
> 
> An action is either final or not, by that action nature/definition. ACLs are 
> one
> of the precondition for applying an action, but ACLs do not affect action
> "finality".

Well, yes.
Strictly speaking, final actions (and maybe any action) do not depend on the 
ACL; let's say it is a natural function/behavior of Squid beyond any ACL.
However, when a final action is present in a rule and that rule contains an 
ACL, the final action will apply to that ACL. At least that is the behaviour I 
see.
If not, Squid would not be splicing "noBumpSites", which is an ACL, as it is 
doing right now.
Am I right?

> > And even with the splice action as second rule, the 3rd rule is
> > processed (Squid is still processing rules after splice noBumpSites
> > ACL).
> 
> An action presence in the rules does not, on its own, stop Squid from
> processing lower rules. *Applying* a final action does.

So why does Squid process the last rule, which stares at step2? It already 
applied the splice to the ACL sites.

> >> After Squid applies the "splice" action (in whatever context, for
> >> whatever reason), SslBump processing for that transaction is over.
> >> Same for "bump" and "terminate" actions.
> 
> > What do You exactly mean with "for that transaction"? Maybe that rule?
> 
> No, I do not mean "that rule". In this context, "transaction" is, roughly
> speaking, an "HTTP CONNECT request" or "TLS connection". An applied final
> action stops all ssl_bump processing for the corresponding
> transaction/request/connection, and not just one ssl_bump rule processing.
> That difference is why those actions are called "final".

OK, thank You for that clarification of misinterpreted terms.

So going back to current config:

  ssl_bump peek step1
  ssl_bump splice noBumpSites # I think that here the splice action is applied 
at step2, even though no step is specified, due to the previous rule.
  ssl_bump stare step2

Since I think the splice action happens at step2 (more checks?) and not at 
step1 (fewer checks), this is the config that best fits my needs.

Quick reminder of the idea/need:
in the most "secure" way possible, bump everything except banks and other 
sensitive sites, with the least possible interference with those sensitive sites.

Just a comment: Squid is working fine, but the cache.log still shows these 
kinds of errors (quite annoying): 

kid1| Error negotiating SSL connection on FD 26: 
error:0001:lib(0):func(0):reason(1) (1/-1)
kid1| ERROR: negotiating TLS on FD 31: error::lib(0):func(0):reason(0) 
(5/-1/104)

It is always, or almost always, only those two types of error [reason(1) (1/-1) 
and (0) (5/-1/104)].

But that is another story; besides, nobody reports problems with browsing.

> Alex.

Thank You, again.



Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-21 Thread Julian Perconti
> > I will go (finally) with this sslBump config. Although I still have some
> doubts...
> > I think that It´s time to finish this thread.
> 
> I am confused because "you think it is time to finish this thread" but you are
> asking new questions. Please clarify, do you want answers to the questions
> in your last email?

Yes, maybe I was contradictory.
All right: if you think that at this point it is worth it, then do it.
Along the thread I drew many wrong conclusions.
It's my fault, not yours.





Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-12 Thread Julian Perconti
> I am afraid you do not. You are probably missing the fact that, at each step,
> the rules after the matching applicable rule are not checked.
> Also, you seem to insert some implicit peeking rules that are never there.
> Finally, there may be some confusion regarding how multiple ACLs on one
> line are evaluated (and/or you do not think that stepN is just an ACL?).

You're right, it's just an ACL like any other. Maybe I lost sight of that 
point.

> Details below.

I will keep trying to understand as best I can.

> > ssl_bump peek step1
> > ssl_bump peek noBumpSites
> > ssl_bump stare all
> 
> 
> >>   ssl_bump peek step1  # implicit "all" at step1
> 
> Yes, if you wish to think about it that way. In reality, the condition
> is exactly "step1", rather than "step1 and all" or "step1 and true".
> 
> 
> >>   ssl_bump peek noBumpSites # As there no step specified, squid match
> at any step
> 
> Not exactly. Squid will evaluate this rule at any step that (a) reaches
> this line and (b) where the peek action is applicable. The intersection
> of those two preconditions is "step2" rather than "any step".

OK, so the most (not to say the *only*) important thing, beyond any step or 
action, is the *sequential, line-by-line order* of the rules (steps).

Example:

  ssl_bump splice noBumpSites # this will be totally ignored by Squid if a 
stare rule precedes this.

i.e.:

  ssl_bump stare noBumpSites # No matter what, here is Squid's first match, and 
it is at step1...
  ssl_bump splice noBumpSites # ...Therefore here Squid is at step2, and this 
line will never match, even though no step is specified on either line, because 
"noBumpSites" was already stared at by the first line.

Well, I am not really sure about the above example (maybe I chose the worst 
one); when I read what the actions do in the wiki, doubts appear. It is just an 
example of how implicit steps work.

Anyway, as an exercise I guess that in this example Squid will do a final 
"splice noBumpSites" at step2, because the stare action always matches at step1 
(and in the wiki, the peek/stare descriptions are the same).

I cannot work out right now what will happen at step3 or SslBump3; I guess 
there will never be a bump, but I am not sure.
*BUT* if an implicit stare occurs at step2 due to the first line, then Squid 
will bump the "noBumpSites" and never match / completely ignore the second line.

> > then this line, match at step1

"This line" was ssl_bump peek noBumpSites

> No, this line will not be evaluated at step1. Only the first rule is
> evaluated at step1 (because that first rule always matches at step1).
> 
> 
> > and then at step2, so when a match occurs at step2 it precludes future
> bumping of the sites listed in the ACL.
> 
> Yes, but that is kind of irrelevant here because there are no bump rules
> to exclude. At step3, this previous/step2 peeking should result in Squid
> applying the default "splice" rule (you can view that as excluding the
> default "bump" rule if you wish).

Yes, that's the idea: default/implicit bump all, except the "noBumpSites"; but 
maybe it is not the best way to do that.

> >> ssl_bump stare all # Here there is either no step2 (and any step)
> >> specified but, because in the previous line You has (implicitly)
> >> peeked at step2, the stare'ing not (or can´t) applies to sites
> >> listed in ACL (they were peeked at step2).
> 
> Something like that. Step2 always happens in this configuration (so "no
> step2" does not make sense), and there is no such thing as "implicit
> peeking", but I think you more-or-less got the right idea here.

I didn't know that "implicit peeking" does not exist, as you said above; I 
always thought that peeking was mandatory.
To summarize: implicit splice and bump always exist; implicit peek does not. Is 
this correct?

See my doubt and conclusion at the end.

> >> ssl_bump peek noBumpSites # Like previous example, but..I guess
> >> that as there is no "all" explicit, this line do a "peek all at
> >> step1" (implicitly)
> 
> No, this line does not do "peek all". It does "peek noBumpSites". That
> is, it tells Squid to peek when and only when both of the conditions
> below are true:
> 
> (a) the peeking action is applicable (i.e., step1 or step2)
> (b) the noBumpSites ACL matches
> 
> The two conditions above are evaluated in the specified order. Condition
> (b) is not evaluated unless condition (a) is satisfied.
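
That two-condition, short-circuit check can be sketched for a peek rule (my own
illustration with made-up names, not Squid code):

```python
def peek_rule_matches(step, acl_matches):
    """Condition (a): peek is only applicable at step1 or step2.
    Condition (b): the ACL callback runs only if (a) holds."""
    if step not in (1, 2):      # (a) applicability is checked first
        return False
    return acl_matches()        # (b) is evaluated only afterwards

# At step3 the ACL is never evaluated (the callback would raise if called):
assert peek_rule_matches(3, lambda: 1 / 0) is False
assert peek_rule_matches(2, lambda: True) is True
```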

Another important point to keep in mind from what you are telling me above.

> > To clarify, if I would add an "all" at the end of this line, then all 
> > traffic would
> be spliced.
> 
> Adding "all" to any line changes nothing as far as line matching is
> considered. The value of "foo and true" is equivalent to the value of "foo".

So the word "all" only makes sense if it is "alone"? Or not even then?
E.g.: ssl_bump peek step1 all = ssl_bump peek step1, *always*?

> I am not sure I interpret your definition correctly, but I hope the
> following statements will answer 

Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-17 Thread Julian Perconti
> > So, when squid reaches this first rule and line (there is no explicit
> > step)  ...does Squid make a "bucle of steps" only along the first line
> > and go to next line only when the rule stop being
> > applicable/matchable?
> 
> I hesitate answering that question with a simple "yes" or "no" because any
> such answer is likely to mislead folks reading this email.

Well yes, and I hope this thread helps others.

> 
> The overall logic is like this:
> 
>   for each step
>   do
>   for each rule
>   do
>   if the rule action is possible and the rule ACLs match,
>   then perform the rule action and either go to the next
>   step or, after applying the final action, exit
>   done
>   apply the default action and exit
>   done

Well, this explanation deserves to be copied and pasted into squid.conf as a 
comment.

Let me know if I understand, through this logic, what Squid does with the 
SslBump rules:

>   for each step
>   do # This loop executes at most three times, because there are 3 steps in 
> the entire SslBump environment.

>   for each rule
>   do # ...and this loop executes as many times as there are rules in the 
> config.

Probably my interpretation of the nested loop is wrong.

Now, how does Squid take and retain decisions when the steps are 
implicit/explicit throughout the rules? The developers know the details.
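
The nested-loop logic quoted above can be turned into a runnable Python toy
model (hypothetical names; the default-action handling is a simplification of
what Squid really does):

```python
FINAL = {"splice", "bump", "terminate"}

def evaluate_ssl_bump(rules, matches):
    """Toy model of the quoted loop: `rules` is a list of (action, acl)
    pairs; matches(acl, step) says whether the rule's ACLs match at that
    step. Names and default handling are simplified, not Squid internals."""
    default = "splice"                       # simplified default action
    for step in (1, 2, 3):                   # outer loop: at most 3 steps
        for action, acl in rules:            # inner loop: first match wins
            if matches(acl, step):
                if action in FINAL:          # splice/bump/terminate
                    return action            # final action: SslBump is over
                # peek keeps splicing possible; stare leads to bump
                default = "splice" if action == "peek" else "bump"
                break                        # go on to the next step
        else:
            break                            # no rule matched: use default
    return default

# The config discussed in this thread (hypothetical ACL membership):
rules = [("peek", "step1"), ("splice", "noBumpSites"), ("stare", "step2")]
nobump = {"bank.example"}

def make_matches(host):
    def matches(acl, step):
        if acl == "step1":
            return step == 1
        if acl == "step2":
            return step == 2
        return acl == "noBumpSites" and host in nobump
    return matches

assert evaluate_ssl_bump(rules, make_matches("bank.example")) == "splice"
assert evaluate_ssl_bump(rules, make_matches("other.example")) == "bump"
```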

> > He is being a passive observer of that TLS traffic.
> 
> Squid also validates what it observes/forwards. And there is also TCP/IP
> traffic before (and around) TLS traffic.

OK, so I will peek instead of splice at step1 and step2, and the final action 
will be splice, and it will happen at step3: the step where the final actions 
are always taken.

I think that splicing at step1 does not make sense according to the doc, and 
also according to the order or sequence of steps in which the rules are 
evaluated.
On the other hand, let Squid do more checks even if a site will be spliced: to 
do that, as I said above, I (think I) have to peek instead of splice at step1 
and step2. 

Moreover, about step3 the Squid doc says: 
  I: Get TLS Server Hello info from the server, including the server 
certificate.
  II: Validate the TLS server certificate.

Finally, the thing that really does not make sense is splicing at step1 and 
then splicing at step2.
According to the Squid doc: "step2/step3 is only performed if a peek or stare 
rule matched during the previous step." (not a splice rule)

> > Here, I am talking about the idea of (explicitly) splice at step1 and then 
> > at
> step2 of a white list of sites.
> 
> If you splice at step1, then the number of validations that Squid does would
> be smaller (possibly zero, not sure) than if you splice at step2.

Again, following the documentation: "Step 2 is only performed if a peek or 
stare rule matched during the previous step." So, is it "correct" to splice at 
step1 or step2?

> The devil is in the details:

Always.

> * A key detail here is determining whether the intended site _is_
> "really/special sensitive". For example, the intercepted client is connecting 
> to
> b::a:d IPv6 address while claiming in the TLS Hello that it is trying to get 
> to
> sensitive.example.com. Should Squid trust the intended destination IP
> address or the TLS SNI? Or should we wait for the server to identify itself
> with a valid SSL certificate? Etc.

From the "security side" I would choose the second option: "...wait for the 
server to identify (...)".

Therefore, I think that just as it is "more secure" to bump at step3, it should 
be more secure to splice at step3 too.

> * The other key detail is what should happen when that sensitive site refuses
> to communicate with Squid or otherwise misbehaves. Should Squid, for
> example, simply close the browser connection, making it more likely that the
> user (or their admin) blames the proxy? Or should Squid bump the browser
> connection to explain what has happened, creating all the headaches
> associated with bumping.

I do not know what to say at this point. Maybe I am missing something...

> Your Squid configuration should reflect all these key decisions.
> If Squid does not have enough configuration options or code to do exactly
> what you want, then you (or others) can always add more code/options. If
> your use case is common/general enough, then quality implementations of
> those additional features should be officially accepted.

Well, let me show you my *second final* config, the closest approximation to 
the actions I want Squid to take; anyone who has read the thread knows it is 
the conclusion of the thread:

Telling Squid explicitly what exactly it has to do at each step:

   ssl_bump peek step1 noBumpSites # at step1 peek or stare do the same, but 
Amos says that stare alters "the letters" while peek does not.
   ssl_bump peek step2 noBumpSites 
   ssl_bump splice step3 noBumpSites # This is probably redundant
   ssl_bump stare step1 # Maybe It 

Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-13 Thread Julian Perconti
> > Example:
> >
> >   ssl_bump splice noBumpSites # this will be totally ignored by Squid if a
> stare rule precedes this.
> 
> No, this is incorrect. There are many cases were a previous stare rule will 
> not
> have the effect you state it will. For example:
> 
>   # Squid may splice at step2 despite the preceding stare rule
>   # because staring at step1 does not preclude splicing.
> 
>   ssl_bump stare step1
>   ssl_bump splice noBumpSites

Well yes, I think you are right; but my example (or what I wanted to convey) 
was: (maybe you posted that to give an example of how that rule could possibly 
not match; I don't know)

   ssl_bump stare noBumpSites  (at this line your example said: ssl_bump stare 
step1)
   ssl_bump splice noBumpSites

...And here a "key question" appears:

   ssl_bump stare noBumpSites # This is the first line of SslBumps ruleset.

So, when Squid reaches this first rule and line (there is no explicit step), 
does Squid make a "loop of steps" only along the first line, and go to the next 
line only when the rule stops being applicable/matchable?
If the answer to my question is "yes", then the second line has no effect, 
because I guess Squid will do a bump more or less this way:

   ssl_bump stare  noBumpSites

... is the same as:

   ssl_bump stare  step1 noBumpSites
   ssl_bump stare  step2 noBumpSites # Here is where the second line stops 
making sense
   ssl_bump bump  step3 noBumpSites # Finally bump due to the previous step

Thus:

   ssl_bump splice noBumpSites # will never match.
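The reasoning above can be sketched as a toy simulation. This is an assumption-laden 
model of the thread's reading of ssl_bump, not Squid's actual code: it assumes rules 
are re-checked at each of the three steps, an omitted step ACL matches every step, 
and a transaction whose last intermediate action was stare/peek defaults to 
bump/splice at step3.

```python
# Toy model of ssl_bump evaluation as discussed in this thread (NOT Squid code).
# A rule is (action, step_or_None, needs_noBumpSites_acl).

def evaluate(rules, matches_noBump):
    """Return the final action taken for one transaction."""
    last_intermediate = None
    for step in (1, 2, 3):
        for action, rule_step, needs_acl in rules:
            if rule_step is not None and rule_step != step:
                continue                       # step ACL does not match
            if needs_acl and not matches_noBump:
                continue                       # noBumpSites ACL does not match
            if action in ("splice", "bump", "terminate"):
                return action                  # final action: evaluation stops
            last_intermediate = action         # peek or stare; re-check next step
            break                              # first matching rule wins this step
    # No explicit final action: step3 default follows the last intermediate action.
    return "splice" if last_intermediate == "peek" else "bump"

# "stare noBumpSites" shadows "splice noBumpSites" at every step,
# so the splice rule never gets a chance to match:
rules = [("stare", None, True), ("splice", None, True)]
print(evaluate(rules, matches_noBump=True))    # bump
```

In this model the first matching rule wins each step, which is exactly why the 
second (splice) line is dead configuration here.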

Going back a bit, Amos explained the following when I asked:

>> ...So that means that squid processes the SslBump directives:
>> 1: maybe more than one time in a single request...?
>> 
>Yes. Up to 3 times. A peek or splice action causes another check later.

Well, Amos never mentioned a "stare" action here, so I don't know whether "stare" 
applies to this too.
And even worse, maybe I did not understand him correctly.

>   # Squid will splice at step1 despite the preceding stare rule
>   # because the preceding stare rule never matches
>   ssl_bump stare !all
>   ssl_bump splice all

And this example is more obvious than the first one. It is as if the previous line 
did not exist.

(...)

> > Does not the splice at step1 and step2 action avoid this? I mean if
> > squid act as a -TCP forward proxy only- for noBumpSites. "Don't touch
> > TLS bytes"
> 
> I am not sure what you mean by "this" exactly, but splicing (at any
> step) does not guarantee the lack of errors. 

OK, but is Squid the culprit of those errors? It is just a passive observer of that 
TLS traffic.
Here I am talking about the idea of (explicitly) splicing a whitelist of sites at 
step1 and then at step2.

Question based on words below:

>>>* If successful, ssl_bump peek and splice actions do not alter TLS
>>>bytes. Peeking and/or splicing Squid can be viewed as a TCP proxy as far
>>>as TLS bytes forwarding is concerned. The client and the origin server
>>>will see the same TLS bytes they would have seen if Squid was not there.
>>>
>>>* In this scope, various errors are usually equivalent to applying the
>>>"bump" action.

>The earlier you tell Squid to
> splice the connections, the fewer checks Squid will do, decreasing the
> probability of an error.

That is the idea behind the noBumpSites ACL: as few errors as possible.

Let's say: "Remove as much responsibility as possible from Squid for what happens 
with really sensitive sites, in case something goes wrong."
Talking to Squid, in other words: "Squid, do a *full* bump on msn.com and 
youtube.com too; but please never do anything to, or touch anything on, 
bankaust.com.au."
(Something like that.)

> Errors lead to bumping the client connection (to
> deliver the error message).

What do you mean by those errors?

Thank You

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-12 Thread Julian Perconti
> > So, in a brief the confi is:
> >
> > ssl_bump peek step1 all
> > ssl_bump peek step2 noBumpSites
> > ssl_bump stare step2 all
> 
> ... which should be equivalent to an even simpler config:
> 
>   ssl_bump peek step1 
>   ssl_bump peek noBumpSites
>   ssl_bump stare all

Yes, I've tested it and the Squid log shows the same, so it appears that Squid takes 
the same actions.

> 
> ... which, for many reasonable definitions of noBumpSites (that match during
> step1 if and only if they should match during step1), can be simplified even
> further:
> 
>   ssl_bump peek noBumpSites
>   ssl_bump stare all

Same as above: this "compact" config seems to behave the same as the three-line 
bump ruleset above.

Please let me know if I understand correctly why those configs are equal, or 
equivalent to the config I posted as the "final" one.

First alternative difference:

>   ssl_bump peek step1  # implicit "all" at step1
>   ssl_bump peek noBumpSites # As no step is specified, Squid matches this line at 
> any step, i.e. at step1 and then at step2; when a match occurs at step2 it 
> precludes future bumping of the sites listed in the ACL.
>   ssl_bump stare all # Here no step2 (nor any other step) is specified either, but 
> because the previous line (implicitly) peeked at step2, the staring does not 
> (cannot) apply to the sites listed in the ACL (they were peeked at step2).

Second alternative difference:

>   ssl_bump peek noBumpSites # Like the previous example, but I guess that since 
> there is no explicit "all", this line does a "peek all at step1" (implicitly), 
> and at step2 the sites listed in the ACL are peeked. To clarify: if I added an 
> "all" at the end of this line, then all traffic would be spliced.
>   ssl_bump stare all # There is no change in this line between the two configs 
> you posted, so my "explanation" would be the same as in the "First alternative".

> However, please note that the three configs above implicitly rely on Squid
> splicing (or bumping) at step3 because of the previously matching
> step2 peek (or stare) action and the lack of an explicit step3 rule.
> Whether Squid v4.2 actually does what it should be doing, I do not know.

Answered: Squid's "automagic" is working as expected. (Squid Cache: Version 
4.2-20180902-r6d8f397)

> 
> > 1: Is this peek-n-splice ruleset insecure?
> 
> Define "secure".

Well, it is not the same to have a TLS-bumping Squid (in the LAN) between a client 
and a sensitive external server while a TLS connection is being established as to 
have nothing between them.

In this sense, I would like to know how I could interfere as little as possible, 
with Squid in the middle, when someone accesses a site that I do not wish to bump.
Or leave as few security holes as possible.

> > 2: It is correct to say that those lines are not necessary/redundant?
> 
> They should be redundant, but I do not know whether Squid v4.2
> implements this aspect of the specs correctly. I know that there were related
> implementation bugs in some Squid v3 releases. You can test and, if needed,
> file a bug report.
> 
> 
> > (#ssl_bump splice step3 noBumpSites/#ssl_bump bump step3 all)
> 
> Please note that the meaning of your noBumpSites ACL changes from one
> step to another (because it gets more/different info). Thus, it is incorrect 
> to
> say that
> 
>   ssl_bump peek step1
>   ssl_bump peek step2 noBumpSites
>   ssl_bump splice step3 noBumpSites
>   ...
> 
> is always exactly equivalent to
> 
>   ssl_bump peek step1
>   ssl_bump peek step2 noBumpSites
>   ssl_bump splice step3 all # should be optional
>   ...
> 
> When using the first configuration, it is possible that, in some specific 
> case,
> noBumpSites matches during step2 but does not match during step3, and
> Squid proceeds to evaluating the remaining "..." rules in that specific case.
> Such sequence of events is not possible in the second configuration because
> splicing at step3 is unconditional there -- it does not rely on noBumpSites
> matches during step3.
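The distinction Alex draws can be sketched with a toy model. The noBumpSites ACL is 
modeled here as a plain function of the step, matching at step2 but not at step3 
(e.g. because new SNI/certificate information arrived); this is an illustrative 
assumption, not Squid code.

```python
# Toy model (NOT Squid code) of why "splice step3 noBumpSites" and
# "splice step3 all" can differ when the ACL's match changes between steps.

def final_action(step3_needs_acl, acl):
    """acl(step) -> whether noBumpSites matches at that step."""
    # steps 1-2: peek whenever the ACL matches (as in the quoted config)
    if not (acl(1) and acl(2)):
        return "other rules..."    # fell through before reaching step3
    if step3_needs_acl and not acl(3):
        return "other rules..."    # the step3 splice rule did not match
    return "splice"

acl = lambda step: step != 3       # matches at step1/step2, not at step3
print(final_action(step3_needs_acl=True, acl=acl))    # other rules...
print(final_action(step3_needs_acl=False, acl=acl))   # splice
```

With the unconditional "splice step3 all" rule the outcome never depends on a step3 
re-evaluation of the ACL, which is exactly the point made above.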

OK, thanks for clarifying that.

Last question.

When I do this:

ssl_bump splice noBumpSites
ssl_bump stare all

I suppose that with this config I am (guessing) implicitly peeking (first?) and 
splicing at any step, and implicitly bumping at step3 the sites that do not match 
the whitelist, by staring at step2. Maybe something like that, I don't know.

The thing is that I see a tunnel in the logs, but instead of an IP address it shows 
a domain (TCP_TUNNEL www.dropbox.com:443), *and* a security ALERT saying that no IP 
matches the xyz.net domain, or something like that.

So, taking into account the needs I have already mentioned, which way should I go?

> HTH,

Always helps.

Thank You!
 
> Alex.




Re: [squid-users] About SSL peek-n-splice/bump configurations

2018-09-18 Thread Julian Perconti
> Both loops can finish "early" (i.e. before three steps and/or before all
> configured rules are evaluated).

Yes, maybe I should at least have said: "Well, really, it depends on the rules...", 
especially for the inner loop.
But I was pointing at the maximum number of possibilities (if they exist).

> Just to avoid misunderstanding: Final actions may be taken at any step, but
> only final actions are possible at step3.

Good point.
My mistake, I forgot that. 
In fact, the actions table makes it clear that a final action like terminate can 
occur at any step and, moreover, that any action can occur at step1.
From another point of view: at step3 only final actions are allowed.

> > I think that splice at step1 does not make sense according to the doc.
> > and also to the order of steps or the sequence, about how the rules
> > are evaluated.
> 
> I do not know what you mean. Splice at step1 is certainly possible and even
> recommended for known non-TLS traffic.

Idem; same comments as above.

> > the thing that really does not makes sense is splice at step1 and then 
> > splice
> at step2:
> 
> It is not possible to splice twice. Splicing is one of the final actions. No 
> other
> action follows a final action (by definition). Search for the two "exit" 
> words in
> the loop summary to find where final actions may be applied.

So, if a rule "x" matches a splice action in the inner loop once the outer loop 
starts, Squid takes that final action for rule "x"; and if no more rules match at 
step1, it exits and proceeds to evaluate the rules for step2.
Something like that?

> The best thing to do depends on your goals and the transaction. Splicing at
> step1, step2, OR step3 makes sense in some cases and does not make sense
> (or is impossible) in others.
> 
> You need to evaluate your rules in the context of a specific transaction
> though: The same set of ssl_bump rules may splice transaction A at step1 and
> transaction C at step3. The loops summarized above are executed from
> scratch for every transaction that reaches ssl_bump directive evaluation.

You lost me here.

> It is impossible to make "splice or bump" decision at step3 because splicing 
> at
> step3 requires peeking at step2 while bumping at step3 requires staring at
> step2. In a context of a single transaction, it is impossible to both peek and
> stare at the same time!
> Thus, you essentially have to make that "splice or bump" decision earlier, at
> step1 or step2, when you have less information than you would have at
> step3. It is almost like the dominant quantum physics theory -- by measuring
> at step2, you determine the outcome of that measurement (i.e. available
> actions at step3).

Wait, maybe I am not explaining myself well, or I do not understand what you mean; 
the ACLs at every step are not the same. See below.

> >ssl_bump peek noBumpSites
> >ssl_bump stare

It is supposed that here, due to the ACL in the first line, Squid will later bump 
everything except the sites that match the ACL.

> > But, what happen if Squid decides automagically wrong? Or something
> does not match...?
> 
> I do not know what you mean by "Squid decides automagically wrong"

Well, it was just a (probably bad) idea/thought.

> At step1 and at step2, if noBumpSites matches, then Squid will peek.
Therefore default splice...

> At step1 and at step2, if noBumpSites does not match, then Squid will stare.
...and default bump.

> At step3, no explicit rules can match so Squid will either splice or bump,
> depending on whether noBumpSites matched at step2.

Yes, just a clarification: in this specific case "at step3, no explicit rules can 
match" (not anymore). 
Everything was already done in the previous steps.

> > Do You think that the above rules is more-or-less the more nearest
> > what I want to do? Excuse me but, I think that at this stage, I gues
> > that You already know what I mean when I say "...what I want to do?"
> 
> Sorry, I do not. And since there are many details that define what one wants
> to (or should) do, it may be impractical to relay all of them on an informal
> email thread. However, if you understand how SslBump rules work, then you
> can either answer a vague "Am I doing what I want to be doing?" question
> yourself or ask more specific questions that can be answered on the mailing
> list.

OK, I am sorry for that; I understood from an earlier message that you had an idea 
of what I want to do.
I will answer with a "little" change in the last config to illustrate what I 
should/want to do in my scenario.

It was: (Again: with this cfg I don't see any domain in TCP_TUNNEL, nor the 
security alerts...)

   ssl_bump peek noBumpSites # Here two steps will happen, and the final action 
(splice) happens at step3 by default.
   ssl_bump stare

And now it is: (And with this I see the domain:443 in TCP_TUNNEL, and security 
alerts in the logs about the domain and IP match.)

  ssl_bump splice noBumpSites # This line reaches a splice rule at step1, and that 
is a final action, without 

Re: [squid-users] very poor performance of rock cache ipc

2023-10-16 Thread Julian Taylor

On 15.10.23 05:42, Alex Rousskov wrote:

On 2023-10-14 12:04, Julian Taylor wrote:

On 14.10.23 17:40, Alex Rousskov wrote:

On 2023-10-13 16:01, Julian Taylor wrote:



The reproducer uses a single request; the very same thing can be 
observed on a very busy Squid


If a busy Squid sends lots of IPC messages between worker and disker, 
then either there is a Squid bug we do not know about OR that disker is 
just not as busy as one might expect it to be.


In Squid v6+, you can observe disker queues using mgr:store_queues cache 
manager report. In your environment, do those queues always have lots of 
requests when Squid is busy? Feel free to share (a pointer to) a 
representative sample of those reports from your busy Squid.


N.B. Besides worker-disker IPC messages, there are also worker-worker 
cache synchronization IPC messages. They also have the same "do not send 
IPC messages if the queue has some pending items already" optimization.





I checked the queues running with the configuration from my initial mail 
with the workers increased, and the queues are generally low, around 1-10 
items in the queue when sending around 100 parallel requests reading 
about 100 MB data files. Here is a sample: https://dpaste.com/8SLNRW5F8
Also, at a higher request rate than the single curl, throughput was 
still more than doubled by increasing the block size.


What are the queues supposed to look like on a busy Squid that is not 
spending a large portion of its time doing notify IPC?


Increasing the number of parallel requests does decrease the amount of overhead, 
but it is still pretty large: I measured about 10%-30% CPU overhead with 
100 parallel requests served from cache, in the worker and disker.

Here a snippet of a profile:
--22.34%--JobDialer::dial(AsyncCall&)
   |
   |--21.19%--Ipc::UdsSender::start()
   |   |
   |--21.13%--Ipc::UdsSender::write()
   |   |
   |   |--16.12%--Ipc::UdsOp::conn()
   |   |  |
   |   |   --15.84%--comm_open_uds(int, int, sockaddr_un*, int)
   |   ||--1.70%--commSetCloseOnExec(int)
   |   | --1.56%--commSetNonBlocking(int)
  ...
--12.98%--comm_close_complete(int)

Clearing and constructing the large Ipc::TypedMsgHdr is also very 
noticeable.


That the overhead is so high and the maximum throughput so low for not-so-busy 
Squids (say 1-10 requests per second, but requests on average > 1 MiB) is 
IMO also a reason for concern and could be improved.


If I understand the way it works correctly: the worker, when it gets a request, 
splits it into 4k blocks and enqueues read requests into the IPC queue, and if 
the queue is empty it emits a notify IPC so the disker starts popping from the 
queue.


On large requests that are answered immediately from the disker, the 
problem seems to be that the queue is mostly empty and it sends an IPC 
ping-pong for each 4k block.


So my thought was: when the request is larger than 4k, enqueue multiple 
pending reads in the worker and only notify after a certain amount has 
been added to the queue, and vice versa in the disker.
So I messed around a bit trying to reduce the notifications by delaying 
the Notify call in src/DiskIO/IpcIo/IpcIoFile.cc for larger requests, but 
it ended up blocking after the first queue push with no notify. If I 
understand the queue correctly, this is because the reader requires a 
notify to initially start, and simply pushing multiple read requests 
onto the queue without notifying will not work as trivially as I hoped.
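The blocking behavior described above is consistent with a notify-on-empty queue 
protocol. Here is a minimal sketch of that idea (a simplified single-threaded 
model, not Squid's actual shared-memory queue code) showing both why a busy queue 
needs only one wake-up message and why skipping the initial notification leaves 
the reader asleep:

```python
# Toy model of the worker->disker shared queue: the sender emits an IPC
# "wake up" only when it pushes onto an EMPTY queue, because a busy reader
# drains further items without any new notifications.
from collections import deque

class SharedQueue:
    def __init__(self):
        self.items = deque()
        self.notifications = 0     # counts IPC wake-up messages sent

    def push(self, item, notify_on_empty=True):
        was_empty = not self.items
        self.items.append(item)
        if was_empty and notify_on_empty:
            self.notifications += 1

    def drain_if_notified(self):
        """The reader runs only after at least one notification."""
        if self.notifications == 0:
            return []              # reader still asleep: nothing drains
        drained = list(self.items)
        self.items.clear()
        return drained

q = SharedQueue()
for block in range(8):             # 8 pending 4 KiB reads, one notify total
    q.push(block)
print(q.notifications)             # 1
print(len(q.drain_if_notified()))  # 8
```

Pushing with notify_on_empty=False models the delayed-Notify experiment: the reader 
never wakes, matching the observed blocking after the first un-notified push.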


Is this approach feasible or am I misunderstanding how it works?


I also tried to add reuse of the IPC connection between calls, so that the 
major source of overhead, tearing down and re-establishing the connection, 
is removed; but that also turned out to be difficult due to the connections 
being closed in various places and the general complexity of the code.



[squid-users] very poor performance of rock cache ipc

2023-10-13 Thread Julian Taylor

Hello,
When using Squid for caching with the rock cache_dir setting, performance 
is pretty poor with multiple workers.
The reason for this is the very high number of system calls 
involved in the IPC between the disker and workers.


You can reproduce this very easily with a simple setup with following 
configuration in the current git HEAD and older versions:


maximum_object_size 8 GB
cache_dir rock /cachedir/cache 1024
cache_peer some.host parent 80 3130 default no-query no-digest
http_port 3128

Now download a larger file from some.host through the cache so it gets cached, 
and repeat.


curl --proxy localhost:3128  http://some.host/file >  /dev/null

The download of the cached file from the local machine will be performed 
at a very low rate; on my not-ancient machine, 35 MB/s with everything 
being cached in memory.


If you check what is happening in the disker, you see that it reads a 
4112-byte IPC message from the worker, performs a read of 4 KiB size, then 
opens a new socket to notify the worker, does 4 fcntl calls on the 
socket, sends a 4112-byte (2 x86 pages) IPC message, and then 
closes the socket; this repeats for every 4 KiB read, and you have 
the same thing on the receiving worker side.


Here an strace of one chunk of the request in the disker:

21:49:28 epoll_wait(7, [{events=EPOLLIN, data={u32=26, u64=26}}], 65536, 
827) = 1 <0.13>
21:49:28 recvmsg(26, {msg_name=0x557d7c4f06b8, msg_namelen=110 => 0, 
msg_iov=[{iov_base="\7\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
iov_len=4112}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 
MSG_DONTWAIT) = 4112 <0.27>
21:49:28 pread64(19, 
"\266E\337\37\374\201b\215\240\310`\216\366\242\350\210\215\22\377zu\302\244Tb\317\255K\10\"p\327"..., 
4096, 10747944) = 4096 <0.15>

21:49:28 socket(AF_UNIX, SOCK_DGRAM, 0) = 11 <0.21>
21:49:28 fcntl(11, F_GETFD) = 0 <0.11>
21:49:28 fcntl(11, F_SETFD, FD_CLOEXEC) = 0 <0.11>
21:49:28 fcntl(11, F_GETFL) = 0x2 (flags O_RDWR) <0.11>
21:49:28 fcntl(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0 <0.12>
21:49:28 epoll_ctl(7, EPOLL_CTL_ADD, 11, 
{events=EPOLLOUT|EPOLLERR|EPOLLHUP, data={u32=11, u64=11}}) = 0 <0.23>
21:49:28 epoll_wait(7, [{events=EPOLLOUT, data={u32=11, u64=11}}], 
65536, 826) = 1 <0.15>
21:49:28 sendmsg(11, {msg_name={sa_family=AF_UNIX, 
sun_path="/tmp/local/var/run/squid/squid-kid-2.ipc"}, msg_namelen=42, 
msg_iov=[{iov_base="\7\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
iov_len=4112}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 
MSG_NOSIGNAL) = 4112 <0.22>

21:49:28 epoll_ctl(7, EPOLL_CTL_DEL, 11, 0x7ffef63da174) = 0 <0.14>
21:49:28 close(11)  = 0 <0.18>


Poking around a bit in the code, I found that increasing 
HTTP_REQBUF_SZ in src/http/forward.h to 32 KiB also affects the read size 
on the disker, making it 8 times more efficient, which is OK (but not great).
(This no longer works the same with 
https://github.com/squid-cache/squid/pull/1335 recently added to the 6.x 
backports, but the 4 KiB issue remains in current master.)


This problem is very noticeable on large objects, but the extreme overhead 
per disk cache request should affect most disk-cached objects.


Is it necessary to have these read chunks so small, and for the processes 
to open and close sockets for every single request instead of reusing 
an open socket?
At least the 4 fcntl calls could be removed/reduced to 1, though that 
only gains 10-30% compared to the 800% from increasing the read size.
Reducing the 4112-byte IPC message, which carries only 4 bytes of data, to 
lower values also results in measurable improvements (though this is dangerous, 
as Squid crashes if it is too low and receives cache manager requests, which 
seem to be around 600 bytes in length).


If the small chunk sizes are needed for certain use cases, I would love a 
configuration flag to set the size to higher values (higher even than the 
current maximum of mem::pagessize, 32 KiB) if that fits the use case. In 
the case where I noticed this, the average object size in the cache was in the 
megabyte range.


Currently, without recompiling Squid, using the rock cache (the only one 
supporting SMP) to utilize modern hardware with 10G or faster networking 
and SSD disks does not seem feasible, unless I missed some configuration 
option which may help here.


Cheers,
Julian


Re: [squid-users] very poor performance of rock cache ipc

2023-10-14 Thread Julian Taylor

On 14.10.23 17:40, Alex Rousskov wrote:

On 2023-10-13 16:01, Julian Taylor wrote:

When using squid for caching using the rock cache_dir setting the 
performance is pretty poor with multiple workers.
The reason for this is due to the very high number of systemcalls 
involved in the IPC between the disker and workers.


Please allow me to rephrase your conclusion to better match (expected) 
reality and avoid misunderstanding:


By design, a mostly idle SMP Squid should use a lot more system calls 
per disk cache hit than a busy SMP Squid would:


* Mostly idle Squid: Every disk I/O may require a few IPC messages.
* Busy Squid: Bugs notwithstanding, disk I/Os require no IPC messages.


In your single-request test, you are observing the expected effects 
described in the first bullet. That does not imply those effects are 
"good" or "desirable" in your use case, of course. It only means that 
SMP Squid was not optimized for that use case; the SMP rock design was 
explicitly targeting the opposite use case (i.e. a busy Squid).


The reproducer uses a single request; the very same thing can be 
observed on a very busy Squid, and the workaround improves both the single-request 
case and the actual heavily loaded production Squid in the same way.


The hardware involved has a 10G card, no SSDs but lots of RAM, so it has 
a very high page cache hit rate, and the Squid is very busy; so busy that it 
is overloaded by system CPU usage in the default configuration with the rock 
cache. The network or disk bandwidth is barely ever utilized beyond 
10%, with all 8 CPUs busy on system load.
The only way to get this Squid to utilize the machine is to increase the 
IO size via the request buffer change, or not use the rock cache. A UFS 
cache works OK in comparison, but requires multiple independent Squid 
instances, as it does not support SMP.


Increasing the IO size to 32 KiB, as I mentioned, does allow the Squid 
workers to utilize a good 60% of the hardware's network and disk capabilities.




Roughly speaking, here, "busy" means "there are always some messages in 
the disk I/O queue [maintained by Squid in shared memory]".



You may wonder how it is possible that an increase in I/O work results 
in decrease (and, hopefully, elimination) of related IPC messages. 
Roughly speaking, a worker must send an IPC "you have a new I/O request" 
message only when its worker->disker queue is empty. If the queue is not 
empty, then there is no reason to send an IPC message to wake up disker 
because disker will see the new message when dequeuing the previous one. 
Same for the opposite direction: disker->worker...


This is probably true if you have slow disks and are actually IO-bound, 
but with fast disks or a high page cache hit rate you essentially see this 
IPC ping-pong and very little actual work being done.





 > Is it necessary to have these read chunks so small

It is not. Disk I/O size should be at least the system I/O page size, 
but it can be larger. The optimal I/O size is probably very dependent on 
traffic patterns. IIRC, Squid I/O size is at most one Squid page 
(SM_PAGE_SIZE or 4KB).


FWIW, I suspect there are significant inefficiencies in disk I/O related 
request alignment: The code does not attempt to read from and write to 
disk page boundaries, probably resulting in multiple low-level disk I/Os 
per one Squid 4KB I/O in some (many?) cases. With modern non-rotational 
storage these effects are probably less pronounced, but they probably 
still exist.
The kernel drivers will mostly handle this for you if multiple requests 
are available, but this is also almost irrelevant with current hardware. 
Typically the hardware will be so fast that software overhead makes it hard to 
utilize modern large disk arrays properly; you probably need to look at 
other approaches, like io_uring, to get rid of the classical read/write 
system call overhead dominating your performance.



Re: [squid-users] squidGuard configuration test - echo test

2015-06-07 Thread Jose Julian Buda

On 2015-06-07 12:15, Helmut Hullen wrote:

Hello, Marcus,

On 07.06.15 you wrote:


Hi, I have installed squidGuard 1.5 on Debian Jessie and I need a
user-based filter; I made the src/dest/acl settings and then tested with:

echo http://www.testsite.com 192.168.0.82/ someuserfromauth GET |
squidGuard -d



The URL redirector interface was changed with Squid 3.4, see also
http://wiki.squid-cache.org/Features/Redirectors
The latest version of squidGuard is 1.5 beta from 2010, and squidGuard
does not support the new interface of Squid.


Sure?
I run squid-3.4.10 and squidGuard-1.5beta on many machines, without
having changed the Redirector line in /etc/squid/squid.conf.

Best regards!
Helmut





Thank you, Helmut, for this information.
I guess squidGuard 1.5.4 should work with Debian stable's Squid 3.4.
So, anything about the ERR answer for this test?

echo http://www.testsite.com 192.168.0.82/ someuserfromauth GET | 
squidGuard -d


...
2015-06-07 00:08:44 [3359] INFO: squidGuard 1.5 started 
(1433646524.285)
2015-06-07 00:08:44 [3359] INFO: squidGuard ready for requests 
(1433646524.286)

ERR
2015-06-07 00:08:44 [3359] INFO: squidGuard stopped (1433646524.287)
...

Is this what is expected for a passing request?
Can anyone do me a favor: run this and tell me what you see?


Thank you very much in advance
Julian


Re: [squid-users] squidGuard configuration test - echo test

2015-06-07 Thread Jose Julian Buda

Thanks, Marcus, for the answer,

but maybe by now squidGuard does work on Debian Jessie:

https://packages.debian.org/jessie/squidguard

http://metadata.ftp-master.debian.org/changelogs/main/s/squidguard/stable_changelog

http://metadata.ftp-master.debian.org/changelogs/main/s/squidguard/squidguard_1.5-4_squidguard.README.Debian


On the other hand, ufdbGuard is not in the Debian repository.

My question is what these lines mean when I run the echo test.

.
2015-06-07 00:08:44 [3359] INFO: squidGuard 1.5 started 
(1433646524.285)
2015-06-07 00:08:44 [3359] INFO: squidGuard ready for requests 
(1433646524.286)

ERR
2015-06-07 00:08:44 [3359] INFO: squidGuard stopped (1433646524.287)
.

As I can see in squid.conf, Squid waits for this kind of answer (ERR) 
from the redirector.



But if it doesn't work anyway, is there another Squid 
url_rewrite_program supported by Debian?
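For what it's worth, the Squid 3.4+ helper interface is simple enough that a 
minimal url_rewrite_program can be sketched in a few lines of Python. This is a 
hypothetical example, not a squidGuard replacement: it answers ERR ("do not change 
the URL") for every request, echoing the channel ID back when Squid runs the helper 
with concurrency enabled.

```python
#!/usr/bin/env python3
"""Minimal url_rewrite_program sketch for the Squid 3.4+ helper protocol.

Assumption-laden example: it allows everything by replying ERR, which
tells Squid "do not change the URL".
"""
import sys

def handle(line: str) -> str:
    fields = line.split()
    # With concurrency enabled, the first field is a numeric channel ID
    # that must be echoed back; otherwise the first field is the URL.
    if fields and fields[0].isdigit():
        return f"{fields[0]} ERR"
    return "ERR"

def main() -> None:
    for line in sys.stdin:
        sys.stdout.write(handle(line.rstrip("\n")) + "\n")
        sys.stdout.flush()  # Squid expects one unbuffered reply per request

if __name__ == "__main__":
    main()
```

Piping the echo test from this thread into such a helper would print ERR, the same 
"pass the request unchanged" answer squidGuard gives.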


thank you in advance

Julian



On 2015-06-07 10:24, Marcus Kool wrote:

The URL director interface was changed with Squid 3.4, see also
http://wiki.squid-cache.org/Features/Redirectors
The latest version of squidguard is 1.5 beta from 2010 and squidGuard
does not support the new interface of Squid.

ufdbGuard is also a URL redirector and since it has regular updates,
ufdbGuard is compatible with the new URL redirector interface of
Squid.
ufdbGuard is free software with a GPL2 license, 99% compatible with
squidGuard, 3x faster, has a lower memory footprint and maintained by
the Dutch company URLfilterDB B.V.
So you have every reason to switch from squiddGuard to ufdbGuard.

Marcus


On 06/07/2015 07:57 AM, Jose Julian Buda wrote:


Hi, I have installed squidGuard 1.5 on Debian Jessie and I need a 
user-based filter; I made the src/dest/acl settings and then tested with:


echo http://www.testsite.com 192.168.0.82/ someuserfromauth GET | 
squidGuard -d


..
2015-06-07 00:08:44 [3359] INFO: squidGuard 1.5 started 
(1433646524.285)
2015-06-07 00:08:44 [3359] INFO: squidGuard ready for requests 
(1433646524.286)

ERR
2015-06-07 00:08:44 [3359] INFO: squidGuard stopped (1433646524.287)
..


This someuserfromauth is supposed to pass the filter and get the 
test site.


My question is: what does the ERR line mean?

In squid.conf it says that the result field ERR from the 
url_rewrite_program means "Do not change the URL";

so is that what it means here, or should it be OK?

Some mailing list answers said that this line should be empty for 
a passing request.


Is it OK with ERR on this line?

There is no way to test with a browser; it is a remote (ssh) 
configuration and I need it in production by Monday.


Thank you in advance

Julian


[squid-users] squidGuard configuration test - echo test

2015-06-07 Thread Jose Julian Buda


Hi, I have installed squidGuard 1.5 on Debian Jessie and I need a 
user-based filter; I made the src/dest/acl settings and then tested with:


echo http://www.testsite.com 192.168.0.82/ someuserfromauth GET | 
squidGuard -d


..
2015-06-07 00:08:44 [3359] INFO: squidGuard 1.5 started 
(1433646524.285)
2015-06-07 00:08:44 [3359] INFO: squidGuard ready for requests 
(1433646524.286)

ERR
2015-06-07 00:08:44 [3359] INFO: squidGuard stopped (1433646524.287)
..


This someuserfromauth is supposed to pass the filter and get the 
test site.


My question is: what does the ERR line mean?

In squid.conf it says that the result field ERR from the 
url_rewrite_program means "Do not change the URL";

so is that what it means here, or should it be OK?

Some mailing list answers said that this line should be empty for 
a passing request.


Is it OK with ERR on this line?

There is no way to test with a browser; it is a remote (ssh) 
configuration and I need it in production by Monday.


Thank you in advance

Julian


Re: [squid-users] squidGuard configuration test - echo test

2015-06-10 Thread Jose Julian Buda



On 08/06/15 08:10, Helmut Hullen wrote:

Hello, Amos,

On 08.06.15 you wrote:


Under squid 3.4 (and many earlier versions) I use

 url_rewrite_program /usr/bin/squidGuard

How must I change this line for squid 3.5?



You should not have to change the SG command line or configuration.


Ok!


What's needed is a patch from
http://bugs.squid-cache.org/show_bug.cgi?id=3978  to be applied to
SG itself. If you are using an OS-provided SG binary, check to see if
they have already patched it.


It's not patched in my version, but it works under squid 3.4.10 -
strange.


The above page mentions

http://www.eu.squid-cache.org/Doc/config/url_rewrite_extras

but  this page doesn't yet exist.


[...]


That should be:



   http://www.squid-cache.org/Doc/config/url_rewrite_extras/
and
   http://www.squid-cache.org/Doc/config/url_rewrite_program/


Ok - now I can read the pages!

Viele Gruesse!
Helmut

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Thank you all. squidGuard 1.5.4 from Debian's repository works fine with
Debian Jessie's Squid 3.4; I got it working yesterday and it does the job well.
ERR from squidGuard means "Do not change the URL", and lets Squid pass
the request.
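For anyone wiring this up from scratch, a minimal squid.conf fragment for
this kind of setup might look like the following. The paths, child count and
extras format are assumptions for illustration, not taken from this thread:

```
# Hand each request URL to squidGuard for filtering
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
url_rewrite_children 5

# Squid 3.5+: extra fields appended after the URL on each helper line
# (%>a client address, %>A client fqdn, %un username, %>rm request method)
url_rewrite_extras "%>a/%>A %un %>rm"
```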


Thank you for your time.

Julian
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

