AW: [squid-users] AW: any chance to optimize squid3?

2013-02-12 Thread Fuhrmann, Marcel
Hello again,

I found out that this delay comes from squid_ldap_group and not from squid_kerb_auth.
I thought it would be faster using Kerberos auth and an LDAP group check:

auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=User_Gruppen,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER

instead of my old config:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20 startup=0 idle=1
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Domain Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
authenticate_cache_garbage_interval 10 seconds
authenticate_ttl 28800 seconds
external_acl_type nt_group ttl=5 children=5 %LOGIN /usr/lib/squid3/wbinfo_group.pl


What can I do? What's the best way to authorize a specific LDAP group?
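
One possibly relevant detail: the new checkgroup line above has no result caching, while the old nt_group helper used ttl=5 children=5. A minimal sketch of the same helper line with result caching turned on, so squid_ldap_group is not called for every single request (the ttl, negative_ttl and children values here are illustrative only, not from this thread):

external_acl_type checkgroup ttl=3600 negative_ttl=300 children=10 %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=User_Gruppen,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER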

Thanks for help.

--
 Marcel


-Original Message-
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Thursday, 7 February 2013 11:22
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hello,

at the moment some users are using my new proxy (with Kerberos auth instead of NTLM). There is just one odd thing left: the first time the browser starts (start page Google), it takes several seconds until the Google page is loaded. When I continue browsing to another page, this delay isn't noticeable. I suspect it has to do with the initial authentication. Is this normal, or can I adjust some config?

This is my config for Kerberos:
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

Thanks for helping me.



-Original Message-
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Saturday, 2 February 2013 11:04
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hi Amos,

I've finally configured Kerberos auth and the LDAP group check. In a few weeks I will report whether the bottlenecks are eliminated.

This is now my config:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=UserGroups,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER
.
(snip)
.
acl Terminalserver src 10.4.1.51-10.4.1.75
acl AUTH proxy_auth REQUIRED
acl InternetGroup external checkgroup internet
.
(snip)
.
http_access deny !AUTH
http_access allow InternetGroup Terminalserver
http_access deny Terminalserver
.
(snip)
.


Thanks for help.



Amos Jeffries wrote:

 The big issues you have are:
 * using NTLM. This seriously caps the proxy performance and capacity. Each 
 new TCP connection (~30 per second from your graphs) requires at least two 
 full HTTP request/reply round trips just to authenticate before the actual
 HTTP response can begin to be identified and fetched. 

 * using group to base access permissions. Like NTLM this caps the capacity of 
 your Squid. 
 
 * using a URL helper. Whether that is a big drag or not depends on what you 
 are using it for and whether Squid can do that faster by itself. 
 
 These are your big performance bottlenecks. Eliminating any of them will 
 speed up your proxy. BUT whether it is worth doing is up to you. 
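
For context on the first point, the per-connection cost looks roughly like this (a simplified trace for illustration only; the URL and header layout are not from the thread). NTLM is connection-oriented and needs a three-leg exchange on every new TCP connection, while Negotiate/Kerberos normally presents a single ticket that can be cached:

NTLM (repeated for every new TCP connection):
  C: GET http://example.com/ HTTP/1.1
  S: HTTP/1.1 407 Proxy Authentication Required   (Proxy-Authenticate: NTLM)
  C: GET ...   Proxy-Authorization: NTLM <type-1 negotiate token>
  S: HTTP/1.1 407 Proxy Authentication Required   (Proxy-Authenticate: NTLM <type-2 challenge>)
  C: GET ...   Proxy-Authorization: NTLM <type-3 response>
  S: HTTP/1.1 200 OK   (the real response finally begins)

Negotiate/Kerberos:
  C: GET http://example.com/ HTTP/1.1
  S: HTTP/1.1 407 Proxy Authentication Required   (Proxy-Authenticate: Negotiate)
  C: GET ...   Proxy-Authorization: Negotiate <SPNEGO/Kerberos ticket>
  S: HTTP/1.1 200 OK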



AW: [squid-users] AW: any chance to optimize squid3?

2013-02-07 Thread Fuhrmann, Marcel
Hello,

at the moment some users are using my new proxy (with Kerberos auth instead of NTLM). There is just one odd thing left: the first time the browser starts (start page Google), it takes several seconds until the Google page is loaded. When I continue browsing to another page, this delay isn't noticeable. I suspect it has to do with the initial authentication. Is this normal, or can I adjust some config?

This is my config for Kerberos:
auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
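
If the Squid build in use supports the helper startup options (the old NTLM config already used startup=/idle= on its children line), pre-starting a few negotiate helpers might trim that very first request, which has to go through the 407 Negotiate challenge and the Kerberos ticket exchange before anything is fetched. A sketch with illustrative numbers only:

auth_param negotiate children 10 startup=5 idle=2   # startup=/idle= values are illustrative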

Thanks for helping me.



-Original Message-
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Saturday, 2 February 2013 11:04
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hi Amos,

I've finally configured Kerberos auth and the LDAP group check. In a few weeks I will report whether the bottlenecks are eliminated.

This is now my config:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=UserGroups,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER
.
(snip)
.
acl Terminalserver src 10.4.1.51-10.4.1.75
acl AUTH proxy_auth REQUIRED
acl InternetGroup external checkgroup internet
.
(snip)
.
http_access deny !AUTH
http_access allow InternetGroup Terminalserver
http_access deny Terminalserver
.
(snip)
.


Thanks for help.



Amos Jeffries wrote:

 The big issues you have are:
 * using NTLM. This seriously caps the proxy performance and capacity. Each 
 new TCP connection (~30 per second from your graphs) requires at least two 
 full HTTP request/reply round trips just to authenticate before the actual
 HTTP response can begin to be identified and fetched. 

 * using group to base access permissions. Like NTLM this caps the capacity of 
 your Squid. 
 
 * using a URL helper. Whether that is a big drag or not depends on what you 
 are using it for and whether Squid can do that faster by itself. 
 
 These are your big performance bottlenecks. Eliminating any of them will 
 speed up your proxy. BUT whether it is worth doing is up to you. 



AW: [squid-users] AW: any chance to optimize squid3?

2013-02-02 Thread Fuhrmann, Marcel
Hi Amos,

I've finally configured Kerberos auth and the LDAP group check. In a few weeks I will report whether the bottlenecks are eliminated.

This is now my config:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
external_acl_type checkgroup %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=UserGroups,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER
.
(snip)
.
acl Terminalserver src 10.4.1.51-10.4.1.75
acl AUTH proxy_auth REQUIRED
acl InternetGroup external checkgroup internet
.
(snip)
.
http_access deny !AUTH
http_access allow InternetGroup Terminalserver
http_access deny Terminalserver
.
(snip)
.


Thanks for help.



Amos Jeffries wrote:

 The big issues you have are:
 * using NTLM. This seriously caps the proxy performance and capacity. Each 
 new TCP connection (~30 per second from your graphs) requires at least two 
 full HTTP request/reply round trips just to authenticate before the actual
 HTTP response can begin to be identified and fetched. 

 * using group to base access permissions. Like NTLM this caps the capacity of 
 your Squid. 
 
 * using a URL helper. Whether that is a big drag or not depends on what you 
 are using it for and whether Squid can do that faster by itself. 
 
 These are your big performance bottlenecks. Eliminating any of them will 
 speed up your proxy. BUT whether it is worth doing is up to you. 



AW: [squid-users] AW: any chance to optimize squid3?

2012-11-24 Thread Fuhrmann, Marcel
I forgot the new graph for this squid test server: 
http://ubuntuone.com/7No4l3bqLr9UNKZObaSsQc
It shows a no-load situation. Nobody is actually using it (except me, for testing purposes).


-Original Message-
From: Fuhrmann, Marcel [mailto:marcel.fuhrm...@lux.ag]
Sent: Saturday, 24 November 2012 08:52
To: squid-users@squid-cache.org
Subject: AW: [squid-users] AW: any chance to optimize squid3?

Hello Amos,

I've installed a test squid server. I'm using CentOS 6.3 (2 GB RAM, 2 CPUs, RAID10 for the cache folder). I followed these guides:
http://serverfault.com/questions/66556/getting-squid-to-authenticate-with-kerberos-and-windows-2008-2003-7-xp
http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/

And now I'm also using Kerberos to authenticate against Windows 2008. And the best part: it's working :-)

Here my config:

cache_mem 64 MB
cache_dir aufs /var/spool/squid 8000 256 256
cache_effective_user squid
cache_mgr lux.supp...@lux.ag
cache_replacement_policy heap LFUDA
maximum_object_size 1000 KB
maximum_object_size_in_memory 128 KB
memory_replacement_policy heap GDSF
error_directory /usr/share/squid/errors/de-de
dns_nameservers 10.4.1.20

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

acl snmplux snmp_community kj3v45hv345j23
#acl LAN src 10.4.1.0/24 10.2.1.0/24
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl AUTH proxy_auth REQUIRED

snmp_access allow snmplux localhost
http_access allow AUTH
#http_access allow LAN
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all

hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i(/cgi-bin/|\?) 0  0%  0
refresh_pattern .   0   20% 4320
icp_port 0
http_port 3128
snmp_port 3401

Two questions about it:

How can I grant internet access to an ADS group called INTERNET? The example in the guide doesn't work for me. If this is working, I will switch all my users to this server and discard my old one. Then I'll be able to test whether this config is more efficient.

Is there any convention for how to arrange a squid config? OK, the rules need a specific order to work correctly, but what about the rest?

Thanks for your help!

--
 Marcel



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Friday, 16 November 2012 23:58
To: squid-users@squid-cache.org
Subject: Re: [squid-users] AW: any chance to optimize squid3?


The big issues you have are:
  * using NTLM. This seriously caps the proxy performance and capacity. 
Each new TCP connection (~30 per second from your graphs) requires at least two 
full HTTP request/reply round trips just to authenticate before the actual HTTP 
response can begin to be identified and fetched.

* using group to base access permissions. Like NTLM this caps the capacity of 
your Squid.

* using a URL helper. Whether that is a big drag or not depends on what you are 
using it for and whether Squid can do that faster by itself.

These are your big performance bottlenecks. Eliminating any of them will speed 
up your proxy. BUT whether it is worth doing is up to you.

All of the evidence is (to my eye anyway) looking like NTLM being the cause of 
a temporary bandwidth flood around 13:30-13:45. Whether that is matching your 
report of slow is unknown. You should drop NTLM anyway if you can. It has 
officially been deprecated by MS and Kerberos is far more efficient and faster.


 From your graphs I note your peak traffic time of 13:15-13:45 shows a 
bandwidth peak of almost 10Mbps. I guess that these are the slow times your 
users are complaining about? - it is expected that things slow down when the 
bandwidth completely fills up although whether you are working off 10Mbps NIC 
is unknown. TCP graphs are showing an increase in the early part of the peak, 
and HTTP response rate peaks out in the second half. This is consistent with 
NTLM sucking up extra bandwidth authenticating new connections - first half of 
the peak is initial TCP setup + HTTP first requests.

Re: AW: [squid-users] AW: any chance to optimize squid3?

2012-11-24 Thread Amos Jeffries

On 24/11/2012 8:52 p.m., Fuhrmann, Marcel wrote:

Hello Amos,

I've installed a test squid server. I'm using CentOS 6.3 (2 GB RAM, 2 CPUs, RAID10 for the cache folder). I followed these guides:
http://serverfault.com/questions/66556/getting-squid-to-authenticate-with-kerberos-and-windows-2008-2003-7-xp
http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/

And now I'm also using Kerberos to authenticate against Windows 2008. And the best part: it's working :-)

Here my config:

cache_mem 64 MB
cache_dir aufs /var/spool/squid 8000 256 256
cache_effective_user squid
cache_mgr lux.supp...@lux.ag
cache_replacement_policy heap LFUDA
maximum_object_size 1000 KB
maximum_object_size_in_memory 128 KB
memory_replacement_policy heap GDSF
error_directory /usr/share/squid/errors/de-de
dns_nameservers 10.4.1.20

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

acl snmplux snmp_community kj3v45hv345j23
#acl LAN src 10.4.1.0/24 10.2.1.0/24
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl AUTH proxy_auth REQUIRED

snmp_access allow snmplux localhost
http_access allow AUTH


The above line means authenticated users (any group or username - just 
need to be accepted as valid by AD) are allowed unlimited access.



#http_access allow LAN
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all


So localhost is the only source which is protected from making CONNECT 
tunnels or requests to your manager interface.


I would move the allow AUTH line down next to the allow localhost line.
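
Something like this ordering, reusing only the lines already present above (a sketch of the idea, not a tested drop-in):

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow AUTH
  http_access allow localhost
  http_access deny all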


icp_access deny all
htcp_access deny all

hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i(/cgi-bin/|\?) 0  0%  0
refresh_pattern .   0   20% 4320
icp_port 0
http_port 3128
snmp_port 3401

Two questions about it:

How can I grant internet access to an ADS group called INTERNET? The example in the guide doesn't work for me. If this is working, I will switch all my users to this server and discard my old one. Then I'll be able to test whether this config is more efficient.


You need to check for auth, then to check group membership.

Your current form only grants access immediately for all authenticated users (allow AUTH).


To ensure users are authenticated, but not to grant access immediately, use this form of auth test:

  http_access deny !AUTH

This means: reject anyone who is not authenticated (Squid will include an auth challenge in that rejection).
You then follow it with another access line to tell Squid what to do with the users who *are* authenticated, such as your group check:


  external_acl_type groups ...
  acl groupCheck external groups INTERNET
  http_access allow groupCheck
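
Put together with the squid_ldap_group helper already posted earlier in the thread, that could look roughly like the following. Treat it as a sketch rather than a tested config: the ttl value is illustrative, and the group, OU and helper options are simply the ones from the earlier mails.

  external_acl_type checkgroup ttl=3600 children=5 %LOGIN /usr/lib64/squid/squid_ldap_group -R -K -b dc=DOMAIN,dc=local -D ldap -w PASSWORD -f (&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%g,ou=UserGroups,dc=DOMAIN,dc=local)) -h DOMAINCONTROLLER
  acl AUTH proxy_auth REQUIRED
  acl InternetGroup external checkgroup INTERNET
  http_access deny !AUTH
  http_access allow InternetGroup
  http_access deny all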




Is there any convention for how to arrange a squid config? OK, the rules need a specific order to work correctly, but what about the rest?


For multi-line options, the order of those lines is very important to how they act together. For single-line directives placement is often less important; it only matters for directives which depend on something, or which something else depends on, being set first (e.g. the delay_pools count directive needs to be set before configuring what those pools are, 'acl' lines need to be set before any access control which uses them, a cache_peer label must be defined before the cache_peer_access setup for it, etc.).
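
As a small illustration of that dependency rule (the pool number and limits below are made up, not a recommendation):

  acl LAN src 10.4.1.0/24              # the acl is defined first ...
  delay_pools 1                        # ... and the pool count declared ...
  delay_class 1 1                      # ... before the pool's class,
  delay_parameters 1 125000/125000     # its limits,
  delay_access 1 allow LAN             # and the access rule that uses the acl.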



Amos


AW: [squid-users] AW: any chance to optimize squid3?

2012-11-23 Thread Fuhrmann, Marcel
Hello Amos,

I've installed a test squid server. I'm using CentOS 6.3 (2 GB RAM, 2 CPUs, RAID10 for the cache folder). I followed these guides:
http://serverfault.com/questions/66556/getting-squid-to-authenticate-with-kerberos-and-windows-2008-2003-7-xp
http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/

And now I'm also using Kerberos to authenticate against Windows 2008. And the best part: it's working :-)

Here my config:

cache_mem 64 MB
cache_dir aufs /var/spool/squid 8000 256 256
cache_effective_user squid
cache_mgr lux.supp...@lux.ag
cache_replacement_policy heap LFUDA
maximum_object_size 1000 KB
maximum_object_size_in_memory 128 KB
memory_replacement_policy heap GDSF
error_directory /usr/share/squid/errors/de-de
dns_nameservers 10.4.1.20

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on

acl snmplux snmp_community kj3v45hv345j23
#acl LAN src 10.4.1.0/24 10.2.1.0/24
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl AUTH proxy_auth REQUIRED

snmp_access allow snmplux localhost
http_access allow AUTH
#http_access allow LAN
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access deny all
htcp_access deny all

hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i(/cgi-bin/|\?) 0  0%  0
refresh_pattern .   0   20% 4320
icp_port 0
http_port 3128
snmp_port 3401

Two questions about it:

How can I grant internet access to an ADS group called INTERNET? The example in the guide doesn't work for me. If this is working, I will switch all my users to this server and discard my old one. Then I'll be able to test whether this config is more efficient.

Is there any convention for how to arrange a squid config? OK, the rules need a specific order to work correctly, but what about the rest?

Thanks for your help!

--
 Marcel



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Friday, 16 November 2012 23:58
To: squid-users@squid-cache.org
Subject: Re: [squid-users] AW: any chance to optimize squid3?


The big issues you have are:
  * using NTLM. This seriously caps the proxy performance and capacity. 
Each new TCP connection (~30 per second from your graphs) requires at least two 
full HTTP request/reply round trips just to authenticate before the actual HTTP 
response can begin to be identified and fetched.

* using group to base access permissions. Like NTLM this caps the capacity of 
your Squid.

* using a URL helper. Whether that is a big drag or not depends on what you are 
using it for and whether Squid can do that faster by itself.

These are your big performance bottlenecks. Eliminating any of them will speed 
up your proxy. BUT whether it is worth doing is up to you.

All of the evidence is (to my eye anyway) looking like NTLM being the cause of 
a temporary bandwidth flood around 13:30-13:45. Whether that is matching your 
report of slow is unknown. You should drop NTLM anyway if you can. It has 
officially been deprecated by MS and Kerberos is far more efficient and faster.


 From your graphs I note your peak traffic time of 13:15-13:45 shows a 
bandwidth peak of almost 10Mbps. I guess that these are the slow times your 
users are complaining about? - it is expected that things slow down when the 
bandwidth completely fills up although whether you are working off 10Mbps NIC 
is unknown. TCP graphs are showing an increase in the early part of the peak, 
and HTTP response rate peaks out in the second half. This is consistent with 
NTLM sucking up extra bandwidth authenticating new connections - first half of 
the peak is initial TCP setup + HTTP first requests, HTTP peaks in a burst of 
challenge responses followed by both further HTTP as the clients send the 
handshake re-request and the actual HTTP response part of the cycle happens 
(client requests peak in both halves, out bandwidth peaks only in the second 
half with the larger responses involved).
  The HTTP response time quadruples (20ms - 80ms) in the 15 minutes
*before* these peaks occur and HIT ratio jumps by ~15% over the peak traffic time.

AW: [squid-users] AW: any chance to optimize squid3?

2012-11-19 Thread Fuhrmann, Marcel
Hi Amos,

thank you for your assessment. So I will try to fix these big issues first. 
I can remove squidGuard because my firewall can do this, too.
I will try to use Kerberos to authenticate to ADS.

--
 Marcel


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Friday, 16 November 2012 23:58
To: squid-users@squid-cache.org
Subject: Re: [squid-users] AW: any chance to optimize squid3?


The big issues you have are:
  * using NTLM. This seriously caps the proxy performance and capacity. 
Each new TCP connection (~30 per second from your graphs) requires at least two 
full HTTP request/reply round trips just to authenticate before the actual HTTP 
response can begin to be identified and fetched.

* using group to base access permissions. Like NTLM this caps the capacity of 
your Squid.

* using a URL helper. Whether that is a big drag or not depends on what you are 
using it for and whether Squid can do that faster by itself.

These are your big performance bottlenecks. Eliminating any of them will speed 
up your proxy. BUT whether it is worth doing is up to you.

All of the evidence is (to my eye anyway) looking like NTLM being the cause of 
a temporary bandwidth flood around 13:30-13:45. Whether that is matching your 
report of slow is unknown. You should drop NTLM anyway if you can. It has 
officially been deprecated by MS and Kerberos is far more efficient and faster.


 From your graphs I note your peak traffic time of 13:15-13:45 shows a 
bandwidth peak of almost 10Mbps. I guess that these are the slow times your 
users are complaining about? - it is expected that things slow down when the 
bandwidth completely fills up although whether you are working off 10Mbps NIC 
is unknown. TCP graphs are showing an increase in the early part of the peak, 
and HTTP response rate peaks out in the second half. This is consistent with 
NTLM sucking up extra bandwidth authenticating new connections - first half of 
the peak is initial TCP setup + HTTP first requests, HTTP peaks in a burst of 
challenge responses followed by both further HTTP as the clients send the 
handshake re-request and the actual HTTP response part of the cycle happens 
(client requests peak in both halves, out bandwidth peaks only in the second 
half with the larger responses involved).
  The HTTP response time quadruples (20ms - 80ms) in the 15 minutes
*before* these peaks occur and HIT ratio jumps by ~15% over the peak traffic 
time. Consistent with a number of requests queueing at the authentication and 
group lookup stages.


I guess you have a 10Mbps NIC, which could be part of the issue. Squid should be able to handle 50-100 req/sec despite NTLM, and yet it is maxing out at 30. But 9.7Mbps is a suspicious number for peak bandwidth.
If your NICs are faster, the above can all happen just the same due to processing time / response time for the helpers. But on faster NICs I would expect to see higher bandwidth, higher TCP connection rates, and longer HTTP response times on the held-up connection attempts.


Alternatively, after 16:30 and before 07:30 the TCP speeds are ramping up/down between the daily normal and the overnight low-traffic throughput.
Squid is designed around a traffic-driven event model. We have some issues where, when there is little enough traffic per millisecond, several components in Squid start taking ~10ms pauses between handling events (to avoid 100% CPU cycling while checking for nothing), which can cause response times to increase somewhat. If your reports are coming in from the early birds or late workers, this is probably the reason.


On 16/11/2012 11:50 p.m., Fuhrmann, Marcel wrote:
 I have some performance graphs. Maybe they will help:
 http://ubuntuone.com/09XVmTzqmNAPgVDmc6h2yI


I see two other weird things.

  * FQDN cache is not storing DNS responses for some reason - that will cause a 
little bit of slowdown.

* the packets/sec graph at your peak traffic (10Mbps) is only showing
~500 packets. Do you have jumbo packets enabled on your network? If so it looks 
like you are getting bandwidth in packets of ~200KB which will cause some 
requests to be held up slightly behind other large packets. 
This is an effect which gets worse as the bandwidth pipes approach full. 
There is no matching congestion control ICMP traffic peak showing up - so I'm 
not sure of the accuracy there.
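
If it helps to dig into the FQDN cache point, the cache manager reports can be pulled straight from the proxy box (squidclient ships with Squid, and localhost access to the manager is already allowed in the config above), for example:

  squidclient -h 127.0.0.1 -p 3128 mgr:fqdncache   # FQDN cache contents and hit rates
  squidclient -h 127.0.0.1 -p 3128 mgr:ipcache     # IP cache
  squidclient -h 127.0.0.1 -p 3128 mgr:info        # general runtime statistics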

Amos