Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-22 Thread Eliezer Croitoru

Can you share the relevant squid.conf settings? Just to reproduce..

I have a dedicated testing server here which I can test the issue on.
8GB archive which might be an ISO and can be cached on AUFS\UFS and 
LARGE ROCK cache types.


I am pretty sure that the maximum cache object size is one thing to 
change, but what more?
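
Something along these lines is what I plan to start from; just a sketch, the 
sizes are my own guesses and the paths are from my test box:

    # allow objects of up to ~8 GB to be cached at all
    maximum_object_size 8 GB
    # plenty of disk for a handful of ISO-sized objects
    cache_dir aufs /var/spool/squid 20000 16 256
    # keep huge objects out of the memory cache
    maximum_object_size_in_memory 512 KB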


From what I understand, it should not be any different for a 2 GB cached 
archive than for an 8 GB cached archive.

I have a local copy of the CentOS 7 ISO, which should be a test-worthy object.
Anything more you can add to the test subject?

Eliezer

On 22/07/2015 16:24, Jens Offenbach wrote:

I checked the bug you have mentioned and I think I am confronted with the same
issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64. I
observed the same behavior. I have tested an 8 GB archive file and I get 100 %
CPU usage and a download rate of nearly 500 KB/sec when the object gets cached.
I have attached strace to the running process, but I killed it after 30 minutes.
The whole thing takes hours, although we have 1-Gbit Ethernet:

Process 4091 attached
Process 4091 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.83    2.622879           1   1823951           write
 12.29    0.408748           2    228029         2 read
  6.18    0.205663           0    912431         1 epoll_wait
  2.58    0.085921           0    456020           epoll_ctl
  0.09    0.002919           0      6168           brk
  0.02    0.000623           2       356           openat
  0.01    0.000286           0       712           getdents
  0.00    0.000071           1        91           getrusage
  0.00    0.000038           0       362           close
  0.00    0.000003           2         2           sendto
  0.00    0.000001           0         3         1 recvfrom
  0.00    0.000000           0         2           open
  0.00    0.000000           0         3           stat
  0.00    0.000000           0         1         1 rt_sigreturn
  0.00    0.000000           0         1           kill
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         2         2 unlink
  0.00    0.000000           0         1           getppid
------ ----------- ----------- --------- --------- ----------------
100.00    3.327152               3428139         7 total

Can I do anything that helps to get rid of this problem?


Sent: Tuesday, 21 July 2015 at 17:37
From: Amos Jeffries squ...@treenet.co.nz
To: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org
squid-users@lists.squid-cache.org
Subject: Re: Aw: Re: [squid-users] Squid3: 100 % CPU load during object caching
On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
   Thank you very much for your detailed explanations. We want to use Squid in
   order to accelerate our automated software setup processes via Puppet. Actually
   Squid will host only a very small number of large objects (10-20). Its purpose
   is not to cache web traffic or little objects.

Ah, Squid does not host, it caches. The difference may seem trivial at
first glance but it is the critical factor between whether a proxy or a
local web server is the best tool for the job.

  From my own experiences with Puppet, yes Squid is the right tool. But
only because the Puppet server was using relatively slow python code to
generate objects and not doing server-side caching on its own. If that
situation has changed in recent years then Squid's usefulness will also
have changed.


   The hit-ratio for all the hosted
   objects will be very high, because most of our VMs require the same software stack.
   I will update my config according to your comments! Thanks a lot!
   But actually I still have no idea why the download rates are so unsatisfying.
   We are still in the test phase. We have only one client that requests a large
   object from Squid and the transfer rates are lower than 1 MB/sec during cache
   build-up without any form of concurrency. Have you got an idea what could be the
   source of the problem here? Why does the Squid process cause 100 % CPU usage?

I did not see any config causing the known 100% CPU bugs to be
encountered in your case (eg. HTTPS going through delay pools guarantees
100% CPU). Which leads me to think it's probably related to memory
shuffling. (http://bugs.squid-cache.org/show_bug.cgi?id=3189 appears
to be the same and is still unidentified.)

As for speed, if the CPU is maxed out by one particular action, Squid
won't have time for much other work. So things go slow.

On the other hand Squid is also optimized for relatively high traffic
usage. For very small client counts (such as under-10) it is effectively
running in idle mode 99% of the time. The I/O event loop starts pausing
for 10ms blocks waiting to see if some more useful amount of work can be
done at the end of the wait. That can lead to apparent network slowdown
as TCP gets up to 10ms delay per packet. But that should not be visible
in CPU numbers.


That said, 1 client can still max out Squid CPU and/or NIC throughput
capacity on a single request if it's pushing/pulling packets fast enough.


If you can attach the strace tool to Squid when it's consuming the CPU,
there might be some better hints about where to look.


Cheers
Amos




Re: [squid-users] suppress sending authentication prompt

2015-07-22 Thread Berkes, David
Thank you very much for your help.  Yes, I agree it's not the approach I would 
like to take.  I believe it may have something to do with the MDM and/or iOS. 
I'm setting up a tcpdump to look at the packets.  What I see is that the 
authentication pop-up occurs on the iPhone, but the credentials have already 
been authenticated.  So, the users hit the cancel button and traffic is allowed to 
proxy.  Below is the output of the access log.  I do notice the TCP_DENIED 
messages, which I don't understand.  Maybe this is part of the issue?

---access.log
1437577600.112   1612 70.197.232.249 TCP_TUNNEL/200 1728 CONNECT myproxyserver.com:443 myproxyuser HIER_DIRECT/206.15.205.62 -
1437577600.120   2089 70.197.232.249 TCP_TUNNEL/200 1728 CONNECT myproxyserver.com:443 myproxyuser HIER_DIRECT/206.15.205.62 -
1437577601.253   2161 70.197.232.249 TCP_TUNNEL/200 5677 CONNECT myproxyserver.com:443 myproxyuser HIER_DIRECT/206.15.205.62 -
1437577601.362      0 70.197.232.249 TCP_DENIED/407 4074 CONNECT myproxyserver.com:443 - HIER_NONE/- text/html

Here is my configuration.  Can you tell me specifically where to place the 
'all' (and in what order) to properly test blocking Squid from actively requesting 
credentials?

##
auth_param basic program /usr/lib64/squid/basic_ncsa_auth 
/etc/squid/squid_passwd
auth_param basic children 20
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 8 hours
auth_param basic casesensitive on

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
http_access deny all

http_port 3128
##

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, July 22, 2015 6:55 AM
To: Berkes, David; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] suppress sending authentication prompt

On 22/07/2015 3:36 a.m., Berkes, David wrote:
 Thank you.
 From the tcpdump, I see the iphone sending requests to the proxy.  Sometimes 
 with credentials and sometimes not.  How can I tell squid to not send 407 in 
 response to the header with no credentials?  I have tried the following 
 variations with no luck.


Think about that for a minute.

If Squid is never allowed to *ask* for credentials. How will it get them?

Do you really want the browser actively broadcasting usernames and passwords in 
trivially decrypted format out into the network, regardless of where it's 
connecting to?

You can block Squid actively requesting credentials by adding 'all' to the end 
of the http_access line(s) that would otherwise end with the ncsa_users ACL check. 
However, that will only cause the browser to display an error page. Access 
Denied, end of transaction, full stop, don't try again.



Remember that the popup is *not* part of HTTP messaging nor the HTTP level 
authentication. It is purely a browser internal mechanism for locating 
credentials.

407 is a perfectly normal HTTP operation. A working browser would always answer 
Squid's 407 queries by sending the MDM-configured credentials, with
*zero* user involvement.

I suspect that perhaps your MDM system is tying the credentials to an
IPv4 address, and the iPhone is using IPv6 on some traffic?
 Or maybe the browser really is braindead and forgetting how to look up the 
credentials.

Amos






Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-22 Thread Jens Offenbach
I will send you my current settings tomorrow. I have used AUFS as the caching 
format, but I have also tested UFS. The format seems to have no influence on 
the issue.

I have tested the 1 GB Ubuntu 15.04 image (ubuntu-15.04-desktop-amd64.iso). 
This is the link 
http://releases.ubuntu.com/15.04/ubuntu-15.04-desktop-amd64.iso.

If you want to stress caching more with large files, you can use one of these:
https://surfer.nmr.mgh.harvard.edu/fswiki/Download

But I think the CentOS 7 ISO is large enough. In my test scenario, I have put 
all files on an internal web server which serves them at a stable 120 MB/sec. So 
the problem does not come from a slow network connection. I have checked 
network connectivity with Iperf3 (= 900 MBit/sec) and made a direct wget 
without Squid. The file gets downloaded at high speed. When Squid is added to the 
communication flow and caches the file on the first request, the issue 
occurs. After some minutes, the download rate drops to 500 KByte/sec and stays 
at this level, together with 100 % CPU load. The download rate corresponds with 
the disk IO. The file gets written at 500 KByte/sec.
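
Roughly, the test looks like this (the host names below are placeholders for my 
internal machines, not the real ones):

    iperf3 -c files.internal                                    # reports about 900 MBit/sec
    wget http://files.internal/ubuntu-15.04-desktop-amd64.iso   # direct: full line speed
    http_proxy=http://squidhost:3128 wget http://files.internal/ubuntu-15.04-desktop-amd64.iso
                                                                # via Squid: drops to ~500 KByte/sec while caching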

Thank you very much! 
 

Sent: Wednesday, 22 July 2015 at 18:28
From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid3: 100 % CPU load during object caching
Can you share the relevant squid.conf settings? Just to reproduce..

I have a dedicated testing server here which I can test the issue on.
8GB archive which might be an ISO and can be cached on AUFS\UFS and
LARGE ROCK cache types.

I am pretty sure that the maximum cache object size is one thing to
change, but what more?

From what I understand, it should not be any different for a 2 GB cached
archive than for an 8 GB cached archive.
I have a local copy of the CentOS 7 ISO, which should be a test-worthy object.
Anything more you can add to the test subject?

Eliezer

On 22/07/2015 16:24, Jens Offenbach wrote:
 I checked the bug you have mentioned and I think I am confronted with the same
 issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64. I
 observed the same behavior. I have tested an 8 GB archive file and I get 100 %
 CPU usage and a download rate of nearly 500 KB/sec when the object gets cached.
 I have attached strace to the running process, but I killed it after 30 minutes.
 The whole thing takes hours, although we have 1-Gbit Ethernet:

 Process 4091 attached
 Process 4091 detached
 % time     seconds  usecs/call     calls    errors syscall
 ------ ----------- ----------- --------- --------- ----------------
  78.83    2.622879           1   1823951           write
  12.29    0.408748           2    228029         2 read
   6.18    0.205663           0    912431         1 epoll_wait
   2.58    0.085921           0    456020           epoll_ctl
   0.09    0.002919           0      6168           brk
   0.02    0.000623           2       356           openat
   0.01    0.000286           0       712           getdents
   0.00    0.000071           1        91           getrusage
   0.00    0.000038           0       362           close
   0.00    0.000003           2         2           sendto
   0.00    0.000001           0         3         1 recvfrom
   0.00    0.000000           0         2           open
   0.00    0.000000           0         3           stat
   0.00    0.000000           0         1         1 rt_sigreturn
   0.00    0.000000           0         1           kill
   0.00    0.000000           0         4           fcntl
   0.00    0.000000           0         2         2 unlink
   0.00    0.000000           0         1           getppid
 ------ ----------- ----------- --------- --------- ----------------
 100.00    3.327152               3428139         7 total

 Can I do anything that helps to get rid of this problem?


 Sent: Tuesday, 21 July 2015 at 17:37
 From: Amos Jeffries squ...@treenet.co.nz
 To: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org
 squid-users@lists.squid-cache.org
 Subject: Re: Aw: Re: [squid-users] Squid3: 100 % CPU load during object caching
 On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
  Thank you very much for your detailed explanations. We want to use Squid in
  order to accelerate our automated software setup processes via Puppet. Actually
  Squid will host only a very small number of large objects (10-20). Its purpose
  is not to cache web traffic or little objects.

 Ah, Squid does not host, it caches. The difference may seem trivial at
 first glance but it is the critical factor between whether a proxy or a
 local web server is the best tool for the job.

 From my own experiences with Puppet, yes Squid is the right tool. But
 only because the Puppet server was using relatively slow python code to
 generate objects and not doing server-side caching on its own. If that
 situation has changed in recent years then Squid's usefulness will also
 have changed.


  The hit-ratio for all the hosted
  objects will be very high, because most of our VMs require the same software stack.
  I will update my config according to your comments! Thanks a lot!
  But actually I still have no idea why the download rates are so unsatisfying.
  We are still in the test phase. We have only one client that requests a large
  object from Squid and the transfer rates are lower than 1 MB/sec during cache
  build-up without any form of concurrency. Have you got an idea what could be the
  source of the problem here? Why does the Squid process cause 100 % CPU usage?

 I did 

Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-22 Thread Eliezer Croitoru

Hey Jens,

I have tested the issue with LARGE ROCK and not AUFS or UFS.
With or without Squid, my connection to the server is about 2.5 MBps (20 Mbps).
Squid is sitting on an Intel Atom with an SSD drive, and on a HIT case the 
download speed is more than doubled, to 4.5 MBps (36 Mbps).

I have not tried it with AUFS yet.

My testing machine is an Arch Linux box with a self-compiled Squid, with 
diskd replaced by rock in the Arch Linux compilation options.


You can take a look at the HIT log at:
http://paste.ngtech.co.il/pnhkglgsu

Eliezer

On 22/07/2015 21:07, Jens Offenbach wrote:

I will send you my current settings tomorrow. I have used AUFS as the caching 
format, but I have also tested UFS. The format seems to have no influence on 
the issue.

I have tested the 1 GB Ubuntu 15.04 image (ubuntu-15.04-desktop-amd64.iso). 
This is the link 
http://releases.ubuntu.com/15.04/ubuntu-15.04-desktop-amd64.iso.

If you want to stress caching more with large files, you can use one of these:
https://surfer.nmr.mgh.harvard.edu/fswiki/Download

But I think the CentOS 7 ISO is large enough. In my test scenario, I have put all 
files on an internal web server which serves them at a stable 120 MB/sec. So the 
problem does not come from a slow network connection. I have checked network 
connectivity with Iperf3 (= 900 MBit/sec) and made a direct wget without 
Squid. The file gets downloaded at high speed. When Squid is added to the communication 
flow and caches the file on the first request, the issue occurs. After some 
minutes, the download rate drops to 500 KByte/sec and stays at this level, together 
with 100 % CPU load. The download rate corresponds with the disk IO. The file gets 
written at 500 KByte/sec.

Thank you very much!


Sent: Wednesday, 22 July 2015 at 18:28
From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid3: 100 % CPU load during object caching
Can you share the relevant squid.conf settings? Just to reproduce..

I have a dedicated testing server here which I can test the issue on.
8GB archive which might be an ISO and can be cached on AUFS\UFS and
LARGE ROCK cache types.

I am pretty sure that the maximum cache object size is one thing to
change, but what more?

 From what I understand, it should not be any different for a 2 GB cached
archive than for an 8 GB cached archive.
I have a local copy of the CentOS 7 ISO, which should be a test-worthy object.
Anything more you can add to the test subject?

Eliezer

On 22/07/2015 16:24, Jens Offenbach wrote:

I checked the bug you have mentioned and I think I am confronted with the same
issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64. I
observed the same behavior. I have tested an 8 GB archive file and I get 100 %
CPU usage and a download rate of nearly 500 KB/sec when the object gets cached.
I have attached strace to the running process, but I killed it after 30 minutes.
The whole thing takes hours, although we have 1-Gbit Ethernet:

Process 4091 attached
Process 4091 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.83    2.622879           1   1823951           write
 12.29    0.408748           2    228029         2 read
  6.18    0.205663           0    912431         1 epoll_wait
  2.58    0.085921           0    456020           epoll_ctl
  0.09    0.002919           0      6168           brk
  0.02    0.000623           2       356           openat
  0.01    0.000286           0       712           getdents
  0.00    0.000071           1        91           getrusage
  0.00    0.000038           0       362           close
  0.00    0.000003           2         2           sendto
  0.00    0.000001           0         3         1 recvfrom
  0.00    0.000000           0         2           open
  0.00    0.000000           0         3           stat
  0.00    0.000000           0         1         1 rt_sigreturn
  0.00    0.000000           0         1           kill
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         2         2 unlink
  0.00    0.000000           0         1           getppid
------ ----------- ----------- --------- --------- ----------------
100.00    3.327152               3428139         7 total

Can I do anything that helps to get rid of this problem?


Sent: Tuesday, 21 July 2015 at 17:37
From: Amos Jeffries squ...@treenet.co.nz
To: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org
squid-users@lists.squid-cache.org
Subject: Re: Aw: Re: [squid-users] Squid3: 100 % CPU load during object caching
On 22/07/2015 12:31 a.m., Jens Offenbach wrote:

Thank you very much for your detailed explanations. We want to use Squid in
order to accelerate our automated software setup processes via Puppet. Actually
Squid will host only a very small number of large objects (10-20). Its purpose
is not to cache web traffic or little objects.


Ah, Squid does not host, it caches. The difference may seem trivial at
first glance but it is the critical factor between whether a proxy or a
local web server is the best tool for the job.

 From my own experiences with Puppet, yes Squid is the right tool. But
only because the Puppet server was using relatively slow python code to
generate objects and not doing server-side caching on its own. If that
situation has changed in recent years then Squid's usefulness will also
have changed.



The hit-ratio for all the hosted
objects will be very high, because most of our 

Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-22 Thread Jens Offenbach
I checked the bug you have mentioned and I think I am confronted with the same issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64. I observed the same behavior. I have tested an 8 GB archive file and I get 100 % CPU usage and a download rate of nearly 500 KB/sec when the object gets cached. I have attached strace to the running process, but I killed it after 30 minutes. The whole thing takes hours, although we have 1-Gbit Ethernet:

Process 4091 attached
Process 4091 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.83    2.622879           1   1823951           write
 12.29    0.408748           2    228029         2 read
  6.18    0.205663           0    912431         1 epoll_wait
  2.58    0.085921           0    456020           epoll_ctl
  0.09    0.002919           0      6168           brk
  0.02    0.000623           2       356           openat
  0.01    0.000286           0       712           getdents
  0.00    0.000071           1        91           getrusage
  0.00    0.000038           0       362           close
  0.00    0.000003           2         2           sendto
  0.00    0.000001           0         3         1 recvfrom
  0.00    0.000000           0         2           open
  0.00    0.000000           0         3           stat
  0.00    0.000000           0         1         1 rt_sigreturn
  0.00    0.000000           0         1           kill
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         2         2 unlink
  0.00    0.000000           0         1           getppid
------ ----------- ----------- --------- --------- ----------------
100.00    3.327152               3428139         7 total

Can I do anything that helps to get rid of this problem?


Sent: Tuesday, 21 July 2015 at 17:37
From: Amos Jeffries squ...@treenet.co.nz
To: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org squid-users@lists.squid-cache.org
Subject: Re: Aw: Re: [squid-users] Squid3: 100 % CPU load during object caching
On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
 Thank you very much for your detailed explanations. We want to use Squid in
 order to accelerate our automated software setup processes via Puppet. Actually
 Squid will host only a very small number of large objects (10-20). Its purpose
 is not to cache web traffic or little objects.

Ah, Squid does not host, it caches. The difference may seem trivial at
first glance but it is the critical factor between whether a proxy or a
local web server is the best tool for the job.

From my own experiences with Puppet, yes Squid is the right tool. But
only because the Puppet server was using relatively slow python code to
generate objects and not doing server-side caching on its own. If that
situation has changed in recent years then Squid's usefulness will also
have changed.


 The hit-ratio for all the hosted
 objects will be very high, because most of our VMs require the same software stack.
 I will update my config according to your comments! Thanks a lot!
 But actually I still have no idea why the download rates are so unsatisfying.
 We are still in the test phase. We have only one client that requests a large
 object from Squid and the transfer rates are lower than 1 MB/sec during cache
 build-up without any form of concurrency. Have you got an idea what could be the
 source of the problem here? Why does the Squid process cause 100 % CPU usage?

I did not see any config causing the known 100% CPU bugs to be
encountered in your case (eg. HTTPS going through delay pools guarantees
100% CPU). Which leads me to think it's probably related to memory
shuffling. (http://bugs.squid-cache.org/show_bug.cgi?id=3189 appears
to be the same and is still unidentified.)

As for speed, if the CPU is maxed out by one particular action, Squid
won't have time for much other work. So things go slow.

On the other hand Squid is also optimized for relatively high traffic
usage. For very small client counts (such as under-10) it is effectively
running in idle mode 99% of the time. The I/O event loop starts pausing
for 10ms blocks waiting to see if some more useful amount of work can be
done at the end of the wait. That can lead to apparent network slowdown
as TCP gets up to 10ms delay per packet. But that should not be visible
in CPU numbers.


That said, 1 client can still max out Squid CPU and/or NIC throughput
capacity on a single request if it's pushing/pulling packets fast enough.


If you can attach the strace tool to Squid when it's consuming the CPU,
there might be some better hints about where to look.
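
For example, something along these lines, substituting the PID of the Squid 
worker process that is spinning (standard strace options, exact PID will differ):

    strace -c -p <squid-worker-pid>    # summary of per-syscall counts and time
    strace -T -p <squid-worker-pid>    # per-call timings, to see which calls are slow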


Cheers
Amos



Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-22 Thread Eliezer Croitoru

On 22/07/2015 21:59, Eliezer Croitoru wrote:

Hey Jens,

I have tested the issue with LARGE ROCK and not AUFS or UFS.
With or without Squid, my connection to the server is about 2.5 MBps (20 Mbps).
Squid is sitting on an Intel Atom with an SSD drive and on a HIT case the
download speed is more than doubled, to 4.5 MBps (36 Mbps).
I have not tried it with AUFS yet.



And I must admit that AUFS beats the rock cache on speed.
I have tried rock with a basic cache_dir rock /var/spool/squid 8000 vs 
cache_dir aufs /var/spool/squid 8000 16 256, and the aufs cache HIT 
result is more than 3 times the speed rock gave with default settings.


So about 15 MBps, which is 120 Mbps.
I do not seem to be hitting what Jens is hitting, but the 100 % CPU might be because 
of the spinning disk hanging while reading the file from disk.


Amos, I remember that there were some suggestions on how to tune large rock.
Any hints?
I can test it and turn it into a recommendation for big files.
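
Something like this is what I would try first; the values are only my guesses 
for ISO-sized objects and are not tested yet:

    # bigger slots and an explicit per-object size cap for large files
    cache_dir rock /var/spool/squid 8000 slot-size=32768 max-size=8589934592
    maximum_object_size 8 GB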

Eliezer



Re: [squid-users] SSL connction failed due to SNI after content redirection

2015-07-22 Thread Alex Wu
We do not use cache_peer. I thought cache_peer is for connecting to another 
squid-like proxy server.

Without ssl-bump, the connection is tunneled transparently, so there is no 
chance to redirect the individual HTTP requests proxied inside the SSL connection.

We want to redirect each HTTP request under SSL, so we have to use ssl-bump to 
terminate the connection, and Squid opens another connection to the server that 
the content redirect specifies. The code takes effect when Squid opens the 
new connection to each designated server for each HTTPS request.

We also terminated the CONNECT call at Squid, to ensure we can intercept HTTP 
requests at Squid.
Alex
 To: squid-users@lists.squid-cache.org
 From: squ...@treenet.co.nz
 Date: Thu, 23 Jul 2015 00:21:31 +1200
 Subject: Re: [squid-users] SSL connction failed due to SNI after content 
 redirection
 
 On 22/07/2015 12:44 p.m., Alex Wu wrote:
  it depends on how you set up squid, and where the connection is broken. The 
  patch addressed the issue that occurred when using ssl-bump and content redirect 
  together.
  
 
 I'd like some clarification on what the exact problem symptoms are, please.
 
 AFAIK, both redirect and re-write actions happen a relatively long time
 *after* the bumping TLS handshakes to the server are completed. It's far too
 late to send the pre-handshake SNI data to the server.
 
 I can see this change as affecting reverse-proxy / CDN configurations
 with TLS on both connections. But you said this was SSL-bumping, and
 reverse-proxy configurations already have a cache_peer option to set the
 internal domain name without re-write/redirect.
 
 Amos
 


Re: [squid-users] suppress sending authentication prompt

2015-07-22 Thread Amos Jeffries
On 22/07/2015 3:36 a.m., Berkes, David wrote:
 Thank you.
 From the tcpdump, I see the iphone sending requests to the proxy.  Sometimes 
 with credentials and sometimes not.  How can I tell squid to not send 407 in 
 response to the header with no credentials?  I have tried the following 
 variations with no luck.
 

Think about that for a minute.

If Squid is never allowed to *ask* for credentials. How will it get them?

Do you really want the browser actively broadcasting usernames and
passwords in trivially decrypted format out into the network, regardless
of where it's connecting to?

You can block Squid actively requesting credentials by adding 'all' to
the end of the http_access line(s) that would otherwise end with the
ncsa_users ACL check. However, that will only cause the browser to
display an error page. Access Denied, end of transaction, full stop,
don't try again.
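
For example, with an auth ACL named ncsa_users, something like this:

    acl ncsa_users proxy_auth REQUIRED
    # with 'all' appended, the line no longer ends on the auth ACL, so a
    # request without credentials just falls through to the deny below
    # (plain error page) instead of triggering a 407 challenge
    http_access allow ncsa_users all
    http_access deny all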



Remember that the popup is *not* part of HTTP messaging nor the HTTP
level authentication. It is purely a browser internal mechanism for
locating credentials.

407 is a perfectly normal HTTP operation. A working browser would always
answer Squid's 407 queries by sending the MDM-configured credentials, with
*zero* user involvement.

I suspect that perhaps your MDM system is tying the credentials to an
IPv4 address, and the iPhone is using IPv6 on some traffic?
 Or maybe the browser really is braindead and forgetting how to look up
the credentials.

Amos



Re: [squid-users] SSL connction failed due to SNI after content redirection

2015-07-22 Thread Amos Jeffries
On 22/07/2015 12:44 p.m., Alex Wu wrote:
 it depends on how you set up squid, and where the connection is broken. The 
 patch addressed the issue that occurred when using ssl-bump and content redirect 
 together.
 

I'd like some clarification on what the exact problem symptoms are, please.

AFAIK, both redirect and re-write actions happen a relatively long time
*after* the bumping TLS handshakes to the server are completed. It's far too
late to send the pre-handshake SNI data to the server.

I can see this change as affecting reverse-proxy / CDN configurations
with TLS on both connections. But you said this was SSL-bumping, and
reverse-proxy configurations already have a cache_peer option to set the
internal domain name without re-write/redirect.
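
A reverse-proxy setup of that kind looks roughly like this (host names and the 
certificate path are placeholders, not taken from your setup):

    https_port 443 accel cert=/etc/squid/example.pem defaultsite=www.example.com
    # forcedomain= sets the Host header sent to the peer, so no
    # re-write/redirect helper is needed to reach the internal server
    cache_peer backend.internal.example parent 443 0 no-query originserver ssl forcedomain=www.example.com name=backend
    cache_peer_access backend allow all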

Amos
