Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread John Joseph
Thanks Augustus for the email 

my information is 

---

[root@proxy squid]# squidclient -h 127.0.0.1 mgr:storedir
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Sun, 04 Aug 2013 07:01:30 GMT
Content-Type: text/plain
Expires: Sun, 04 Aug 2013 07:01:30 GMT
Last-Modified: Sun, 04 Aug 2013 07:01:30 GMT
X-Cache: MISS from proxy
X-Cache-Lookup: MISS from proxy:3128
Via: 1.0 proxy (squid/3.1.10)
Connection: close

Store Directory Statistics:
Store Entries  : 13649421
Maximum Swap Size  : 58368 KB
Current Store Swap Size: 250112280 KB
Current Capacity   : 43% used, 57% free

Store Directory #0 (aufs): /opt/var/spool/squid
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 256
Maximum Size: 58368 KB
Current Size: 250112280 KB
Percent Used: 42.85%
Filemap bits in use: 13649213 of 16777216 (81%)
Filesystem Space in use: 264249784/854534468 KB (31%)
Filesystem Inodes in use: 13657502/54263808 (25%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 44.69 days

--

and my squid.conf is as follows

--

always_direct allow all
cache_log   /opt/var/log/squid/cache.log
cache_access_log    /opt/var/log/squid/access.log

cache_swap_low 90
cache_swap_high 95

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 172.16.5.0/24    # RFC1918 possible internal network
acl localnet src 172.17.0.0/22    # RFC1918 possible internal network
acl localnet src 192.168.20.0/24    # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
always_direct allow local-servers

acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443        # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210        # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280        # http-mgmt
acl Safe_ports port 488        # gss-http
acl Safe_ports port 591        # filemaker
acl Safe_ports port 777        # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports


acl ipgroup src 172.16.5.1-172.16.5.255/32
acl ipgroup src 172.17.0.10-172.17.3.254/32
delay_pools 1
delay_class 1 2 
delay_parameters 1 256/386 14/18
delay_access 1 allow ipgroup
delay_access 1 deny all

http_access allow localnet
http_access allow localhost
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128 transparent

hierarchy_stoplist cgi-bin ?

cache_dir aufs /opt/var/spool/squid 57 32 256

coredump_dir /opt/var/spool/squid


maximum_object_size 4 GB


refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320

refresh_pattern ^ftp:        1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .        0    40%    40320


visible_hostname proxy

icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all

Guidance and advice requested

Thanks for the reply
Joseph John





- Original Message -
From: babajaga augustus_me...@yahoo.de
To: squid-users@squid-cache.org
Cc: 
Sent: Thursday, 1 August 2013 2:11 PM
Subject: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

The relatively low byte hit rate suggests that somewhere in your
squid.conf there is a limitation on the maximum object size to be cached. It
might be a good idea to raise that to a larger value,
because it seems you still have a lot of disk space available for caching.
So you might post your squid.conf here.

And, the output of
squidclient -h 127.0.0.1 mgr:storedir





Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread Amos Jeffries

On 4/08/2013 7:13 p.m., John Joseph wrote:

Thanks Augustus for the email

my information is

---

[root@proxy squid]# squidclient -h 127.0.0.1 mgr:storedir
HTTP/1.0 200 OK
Server: squid/3.1.10
Mime-Version: 1.0
Date: Sun, 04 Aug 2013 07:01:30 GMT
Content-Type: text/plain
Expires: Sun, 04 Aug 2013 07:01:30 GMT
Last-Modified: Sun, 04 Aug 2013 07:01:30 GMT
X-Cache: MISS from proxy
X-Cache-Lookup: MISS from proxy:3128
Via: 1.0 proxy (squid/3.1.10)
Connection: close

Store Directory Statistics:
Store Entries  : 13649421
Maximum Swap Size  : 58368 KB
Current Store Swap Size: 250112280 KB
Current Capacity   : 43% used, 57% free

Store Directory #0 (aufs): /opt/var/spool/squid
FS Block Size 4096 Bytes
First level subdirectories: 32
Second level subdirectories: 256
Maximum Size: 58368 KB
Current Size: 250112280 KB
Percent Used: 42.85%
Filemap bits in use: 13649213 of 16777216 (81%)
Filesystem Space in use: 264249784/854534468 KB (31%)
Filesystem Inodes in use: 13657502/54263808 (25%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 44.69 days


You appear to have a good case there for upgrading to squid-3.2 or later 
and adding a rock cache_dir.


As you can see, 81% of the Filemap is full. That is the file number space 
Squid uses to internally reference stored objects. There is an absolute 
limit of 2^24 (or 16777216 in the above report). That will require an 
average object size of 35KB to fill your 557 GB storage area. Your 
details earlier said the mean object size actually stored so far was 18KB.


If you add a 50GB rock store alongside that UFS directory you should be 
able to double the cached object count.
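For reference, a minimal sketch of what that could look like on squid-3.2 or later. The rock directory path and size below are illustrative values, not taken from this thread:

cache_dir aufs /opt/var/spool/squid 57 32 256
# hypothetical rock store for the many small objects;
# rock in squid-3.2 only holds objects up to 32 KB each
cache_dir rock /opt/var/spool/squid-rock 50000 max-size=32768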



--

and my squid.conf is as follows

--

always_direct allow all
cache_log   /opt/var/log/squid/cache.log
cache_access_log    /opt/var/log/squid/access.log

cache_swap_low 90
cache_swap_high 95

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 172.16.5.0/24    # RFC1918 possible internal network
acl localnet src 172.17.0.0/22    # RFC1918 possible internal network
acl localnet src 192.168.20.0/24    # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
always_direct allow local-servers


You are using always_direct allow all above. This line is never even 
being checked.


Also, always_direct has no meaning when there are no cache_peer lines to 
be overridden (which is the purpose of always_direct). You can remove 
both the always_direct lines to make things a bit faster.



acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443       # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210       # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280       # http-mgmt
acl Safe_ports port 488       # gss-http
acl Safe_ports port 591       # filemaker
acl Safe_ports port 777       # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports


acl ipgroup src 172.16.5.1-172.16.5.255/32
acl ipgroup src 172.17.0.10-172.17.3.254/32
delay_pools 1
delay_class 1 2
delay_parameters 1 256/386 14/18
delay_access 1 allow ipgroup
delay_access 1 deny all

http_access allow localnet
http_access allow localhost
http_access allow localnet
http_access allow localhost


You have doubled these rules up.


http_access deny all

http_port 3128 transparent


It is a good idea to always have 3128 listening for regular proxy traffic 
and to redirect the intercepted traffic to a separate port. The 
interception port is a private detail relevant only to the NAT 
infrastructure doing the redirection and to Squid. It can be firewalled to 
prevent any access directly to the port.
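A minimal sketch of that layout (the second port number is an arbitrary example; squid-3.1 spells the option "transparent", 3.2 and later spell it "intercept"):

http_port 3128                # regular forward-proxy traffic
http_port 3129 transparent    # NAT-redirected traffic only; firewall direct access to this port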




hierarchy_stoplist cgi-bin ?

cache_dir aufs /opt/var/spool/squid 57 32 256

coredump_dir /opt/var/spool/squid


maximum_object_size 4 GB


Can you try placing this above the cache_dir line please and see if it 
makes any difference?
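That is, something along these lines, using the values already present in this config:

maximum_object_size 4 GB
cache_dir aufs /opt/var/spool/squid 57 32 256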



refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private


ignore-private and ignore-no-store are actually VERY bad ideas. No 
matter that it looks okay for innocent things like images and archives. 
Even those types 

Re: [squid-users] Squid cache siblings configuration

2013-08-04 Thread Amos Jeffries

On 4/08/2013 5:22 p.m., Tyler Sweet wrote:

Hello,

My second message to the mailing list :)

I've run into some problems when it comes to having two squid boxes
configured to be siblings to each other. I wasn't able to pull much
data about what happened, but I can sum it up for you here and then
try to replicate it back when I get access to my home lab again.

We're handling about 100-200 requests a second, mainly medium to small
files, with the occasional 2+GB game update or so. What we saw
happening, even under medium to low load (less than 50 users, probably
closer to 10-20 requests a second) was that when both squid servers
were set up with each other as a cache peer, one or more squid
processes would start to eat memory. Eventually, they would either eat
enough by themselves (22GB) or 4-7 together (each with 4 or more GB of
memory in RSS) to cause the server to run into out of memory
conditions and kill squid.


Which *exact* release versions have you observed this behaviour in?


Originally, I thought this was caused by my self-compiled version of
Squid 3.4 on FreeBSD, and since I was low on time and had no time to
look into it further, I reloaded the servers to CentOS 6.4 and used
the repo listed on the squid site to install squid 3.3.8. The problem
persisted, and without any time to troubleshoot I simply disabled the
cache-peer configurations.

I'm pretty sure I've messed up the configuration somehow. Here are
what I think are the relevant config settings I've been using:
# Squid Boxen #
acl siblings src 172.16.1.91
acl siblings src 172.16.1.90 # Local server
# Cache Peers
htcp_port 4827
htcp_access allow siblings
htcp_clr_access allow siblings
htcp_access deny all
htcp_clr_access deny all
# Sibling
cache_peer 172.16.1.91 sibling 3128 4827 htcp
cache_peer_access 172.16.1.91 deny STEAM_CONTENT
cache_peer_access 172.16.1.91 allow all

Now, looking at the config I feel like I should probably have set the
siblings acl separately on both servers, to deny HTCP access from
looping around.


Yes, in each config it should define only the IP of the other sibling.

NP: IIRC there is a bug still in the CLR handling causing Squid to loop 
CLR requests between the peers indefinitely. That should not eat up so 
much memory, but might eat bandwidth.



  But I don't know if that looping would have had this
effect or not, nor do I remember seeing anything in the logs about
looping happening. Can anyone offer some guidance on this? Is it
simply that I messed up the initial configuration?


The above part looks fine by itself.

The main thing in Squid that controls forwarding loops is "via on", 
which is the default. I assume you have not disabled that.
The backup you can add is a cache_peer_access deny line that prevents 
sending to the peer any requests which came from that peer in the first 
place (cache_peer_access 172.16.1.91 deny siblings).
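Put concretely, a sketch based on the config quoted above; each box lists only the *other* server's address, so the two configs are mirror images rather than identical:

# on 172.16.1.90 (swap the addresses for the config on 172.16.1.91)
acl siblings src 172.16.1.91                       # only the other box, not the local one
htcp_port 4827
htcp_access allow siblings
htcp_access deny all
cache_peer 172.16.1.91 sibling 3128 4827 htcp
cache_peer_access 172.16.1.91 deny STEAM_CONTENT   # as in your existing config (acl defined elsewhere)
cache_peer_access 172.16.1.91 deny siblings        # don't forward the peer's own requests back to it
cache_peer_access 172.16.1.91 allow all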


Amos


[squid-users] reading swap.state file

2013-08-04 Thread Hussam Al-Tayeb
How can I parse the swap.state file for inconsistencies?
i.e. files referenced in swap.state but not in the disk cache,
or files on disk but not referenced in swap.state. It seems squid does not know 
how to shut down correctly if one of the users is viewing a youtube video.


[squid-users] Squid 3.2. and 3.3. under FreeBSD incompatible with windows?

2013-08-04 Thread lorenz_81
I have compiled squid with default settings and have changed nothing in the 
squid.conf. Every download initiated under Windows that lasts longer than a few 
seconds doesn't finish. 

Under Linux everything runs well. I have tried 4 different Windows 
clients, disabled PF under FreeBSD, recompiled with different settings, and changed 
the squid.conf. Nothing helps. 

What to do now?


[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread babajaga
As I guessed in my first reply, you are reaching the maximum limit of
cached objects in your cache_dir, as Amos explained, which will render
part of your disk space ineffective.

However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you have a production system, I would suggest 2
ufs/aufs.) 

BUT, and this is valid for both alternatives, be careful to avoid
double caching by applying consistent limits on the size of cached objects.
Note that there are several limits to be considered:
maximum_object_size_in_memory  xx KB
maximum_object_size yyy KB
minimum_object_size 0 KB
cache_dir aufs /var/cacheA/squid27 250 16 256 min-size=0 max-size=
cache_dir aufs /var/cacheB/squid27 250 16 256 min-size=zzz+1 max-size= KB

And when doing this, you should use the newest squid release, or good old
2.7 :-)
The reason is that there were a few squid 3.x versions with a bug when
evaluating the combination of different limit options, with the consequence
of not storing certain cacheable objects on disk.
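For illustration, here is that scheme with the placeholders filled in; the paths, sizes and the 512 KB split point are invented values, not a recommendation:

maximum_object_size_in_memory 64 KB
maximum_object_size 4 GB
minimum_object_size 0 KB
# note: cache_dir min-size/max-size are given in bytes
cache_dir aufs /var/cacheA/squid 250000 16 256 min-size=0 max-size=524288
cache_dir aufs /var/cacheB/squid 250000 16 256 min-size=524289 max-size=4294967296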






Re: [squid-users] helper output in logs

2013-08-04 Thread Alfredo Rezinovsky

El 02/08/13 18:03, Amos Jeffries escribió:

On 3/08/2013 3:37 a.m., Alfredo Rezinovsky wrote:
I'm using store_id_program and I want my program to output a Key/Value 
pair so the value goes in the logs


I tried with log and tag keys and using %[et and %[ea in 
LogFormat but it didn't work.


Is there generic Key/Value handling, or does each helper need special 
support?


See the new %note token. The intention is to make the keys sent back 
available as annotations, including custom ones from any helper.
It is not quite working for all helpers yet (assistance with that 
welcome), but should be working for the store-ID ones.



The %{my_key}note in logformat worked exactly as I wanted.
It does work with store-id keys

Thanks.




Re: [squid-users] helper output in logs

2013-08-04 Thread Eliezer Croitoru
On 08/04/2013 09:46 PM, Alfredo Rezinovsky wrote:
 El 02/08/13 18:03, Amos Jeffries escribió:
 On 3/08/2013 3:37 a.m., Alfredo Rezinovsky wrote:
 I'm using store_id_program and I want my program to output a Key/Value
 pair so the value goes in the logs

 I tried with log and tag keys and using %[et and %[ea in
 LogFormat but it didn't work.

 Is there generic Key/Value handling, or does each helper need special
 support?

 See the new %note token. The intention is to make the keys sent back
 available as annotations, including custom ones from any helper.
 It is not quite working for all helpers yet (assistance with that
 welcome), but should be working for the store-ID ones.

 The %{my_key}note in logformat worked exactly as I wanted.
 It does work with store-id keys
 
 Thanks.
 
 
Can you share the squid.conf output??
So others can see how it's being done properly?

Thanks,
Eliezer


Re: [squid-users] helper output in logs

2013-08-04 Thread Alfredo Rezinovsky

El 04/08/13 15:52, Eliezer Croitoru escribió:

On 08/04/2013 09:46 PM, Alfredo Rezinovsky wrote:

El 02/08/13 18:03, Amos Jeffries escribió:

On 3/08/2013 3:37 a.m., Alfredo Rezinovsky wrote:

I'm using store_id_program and I want my program to output a Key/Value
pair so the value goes in the logs

I tried with log and tag keys and using %[et and %[ea in
LogFormat but it didn't work.

Is there generic Key/Value handling, or does each helper need special
support?

See the new %note token. The intention is to make the keys sent back
available as annotations, including custom ones from any helper.
It is not quite working for all helpers yet (assistance with that
welcome), but should be working for the store-ID ones.


The %{my_key}note in logformat worked exactly as I wanted.
It does work with store-id keys

Thanks.



Can you share the squid.conf output??
So others can see how it's being done properly?


I'm trying to make a store_id_program with many plugins.

My store_id_program outputs

  OK store-id=http://STOREID/ whatever ... store-id-plugin=youtube
  OK store-id=http://STOREID/ whatever ... store-id-plugin=vimeo
  OK store-id=http://STOREID/ whatever ... store-id-plugin=facebookimages


I wanted the store-id-plugin to appear in the logs so I can make per-plugin 
statistics.


I don't need the user name in the logs and I want to keep my logs as standard 
as possible, so I replaced %[un with %[{store-id-plugin}note in the 
logformat configuration.


My logformat in squid.conf:

  logformat access %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[{store-id-plugin}note %Sh/%<a %mt


Now I have the store-id-plugin name (or a - if no plugin) in the logs.
Here's two log line samples:

1375644685.928   2211 127.0.0.1 TCP_MISS/200 3438 GET http://profile.ak.fbcdn.net/hprofile-ak-prn2/0_000_0_q.jpg facebookimages HIER_DIRECT/209.0.0.1 image/jpeg
1375647306.276   1628 127.0.0.1 TCP_MISS/200 2763 GET http://query.yahooapis.com/v1/public/yql? - HIER_DIRECT/98.99.100.101 application/json


You can try %note in logformat and you will see a long string containing 
ALL the key/value pairs in your logs.


(Some values were masked for privacy)

--
Alfrenovsky


Re: [squid-users] Squid 3.2. and 3.3. under FreeBSD incompatible with windows?

2013-08-04 Thread Amos Jeffries

On 5/08/2013 1:33 a.m., lorenz_81 wrote:

I have compiled squid with default settings and have changed nothing in the 
squid.conf. Every download initiated under Windows that lasts longer than a few 
seconds doesn't finish.

Under Linux everything runs well. I have tried 4 different Windows 
clients, disabled PF under FreeBSD, recompiled with different settings, and changed 
the squid.conf. Nothing helps.

What to do now?


3.2 and 3.3 do not support being _run_ on Windows. But what system the 
client uses is irrelevant; the HTTP protocol works on every OS.


I think your problem is elsewhere. Most probably in ECN, MTU or 
window-scaling problems that are well known when using TCP between Windows 
and other systems.


Amos


Re: [squid-users] reading swap.state file

2013-08-04 Thread Amos Jeffries

On 5/08/2013 1:11 a.m., Hussam Al-Tayeb wrote:

How can I parse the swap.state file for inconsistencies?
i.e. files referenced in swap.state but not in the disk cache.


swap.state is a journal of transactions. It includes references to 
operations that occurred on old deleted files as part of its normal 
content. Squid automatically handles any such files that lack a delete 
record; you do not need to worry about it.



Or files on disk but not referenced in swap.state. It seems squid does not know
how to shut down correctly if one of the users is viewing a youtube video.


What do you mean by that last one? If Squid is shutting down it waits 
shutdown_timeout for clients to finish up (a long video would not do so) 
then terminates all remaining client connections and stops. None of them 
get written to the log, and on restart the corrupted files will be 
overwritten.
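For reference, the directive in question; 30 seconds is the compile-time default and is shown only as an illustration:

shutdown_timeout 30 seconds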


Amos



Re: [squid-users] Squid cache siblings configuration

2013-08-04 Thread Tyler Sweet
I didn't have via defined in my squid.conf, so that should be default
on both servers.

On FreeBSD, I built 3.4.0.1 from the website. CentOS has 3.3.8. I'll
look into labbing this and seeing if I can reproduce the error and
then try working with the cache_peer_access rules to deny the local
box.

I'll let you know what I find!

-Tyler Sweet


Re: [squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread Amos Jeffries

On 5/08/2013 4:17 a.m., babajaga wrote:

As I guessed in my first reply, you are reaching the maximum limit of
cached objects in your cache_dir, as Amos explained, which will render
part of your disk space ineffective.

However, as an alternative to using rock, you can set up a second ufs/aufs
cache_dir.
(Especially in case you have a production system, I would suggest 2
ufs/aufs.)


Erm. On fast or high-traffic proxies Squid uses the disk I/O capacity to 
the limits of the hardware. If you place two UFS-based cache_dirs on one 
physical disk spindle with lots of small objects, they will fight for I/O 
resources, with the result of a dramatic reduction in both performance and 
disk lifetime relative to the traffic speed. Rock and COSS cache types 
avoid this by aggregating the small objects into large blocks which get 
read/written all at once.



BUT, and this is valid for both alternatives, be careful to avoid
double caching by applying consistent limits on the size of cached objects.


You won't get double caching within one proxy process. This only happens 
with multiple proxies or with SMP workers.

Note that there are several limits to be considered:
maximum_object_size_in_memory  xx KB
maximum_object_size yyy KB
minimum_object_size 0 KB
cache_dir aufs /var/cacheA/squid27 250 16 256 min-size=0 max-size=
cache_dir aufs /var/cacheB/squid27 250 16 256 min-size=zzz+1 max-size= KB

And when doing this, you should use the newest squid release, or good old
2.7 :-)
The reason is that there were a few squid 3.x versions with a bug when
evaluating the combination of different limit options, with the consequence
of not storing certain cacheable objects on disk.


That bug still exists; the important thing until it gets fixed is to 
place maximum_object_size above the cache_dir options.


Amos


[squid-users] Re: Squid monitoring, access report shows upto 5 % to 7 % cache usage

2013-08-04 Thread babajaga
Erm. On fast or high-traffic proxies Squid uses the disk I/O capacity to
the limits of the hardware. If you place two UFS-based cache_dirs on one
physical disk spindle with lots of small objects, they will fight for I/O
resources, with the result of a dramatic reduction in both performance and
disk lifetime relative to the traffic speed. Rock and COSS cache types
avoid this by aggregating the small objects into large blocks which get
read/written all at once. 

Really that bad? 
As squid does not use raw disk I/O for any cache type, OS/FS-specific
buffering/merging/delayed writes will always happen before cache objects
are really written to disk. So, a priori, I would not see a serious
difference between ufs/aufs/rock/COSS on the same spindle for the same object
size (besides some overhead for creating the FS info for ufs/aufs). COSS is
out of favour anyway, because of being unstable, right?





[squid-users] kerberos ERROR: gss_accept_sec_context() failed: Unspecified GSS failure

2013-08-04 Thread Glenn groves
Hi All,



I have been setting up a new proxy. It needs to have Kerberos auth so
that the users on the domain do not get prompted for a password, but
are authenticated, and this is to show in the logs. Sorry for the
formatting; I tried using the bold and embed tags but they did not
work.



It does not work for Windows 7, Windows 8 or Windows 2008.



I have it working when I try from a Windows 2003 OS, and can see the
auth occurring in the logs:



D1jAEc= u...@domain.com.au

2013/08/05 11:48:16| squid_kerb_auth: INFO: User u...@domain.com.au
authenticated



However, from a Windows 7 or Windows 8 PC, the authentication does not
complete and instead there is an error:



2013/08/05 11:48:31| squid_kerb_auth: ERROR: gss_accept_sec_context()
failed: Unspecified GSS failure.  Minor code may provide more
information.

2013/08/05 11:48:31| authenticateNegotiateHandleReply: Error
validating user via Negotiate. Error returned 'BH
gss_accept_sec_context() failed: Unspecified GSS failure.  Minor code
may provide more information.



== /var/log/squid/cache.log ==

2013/08/05 11:48:31| squid_kerb_auth: INFO: User not authenticated





Below is some information on the configuration:



We are running 3 x 2008R2 domain controllers and 1 x 2003 domain
controller, thus the domain mode is set to 2003.



The krb5.conf file contains:



[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = MYDOMAIN.COM.AU
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 24h
default_keytab_name = /etc/squid/PROXY.keytab
forwardable = true

; Note, because we have a 2003 domain controller, I have the 2003 uncommented below, not the 2008 with AES
; for Windows 2003
default_tgs_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
default_tkt_enctypes = rc4-hmac des-cbc-crc des-cbc-md5
permitted_enctypes = rc4-hmac des-cbc-crc des-cbc-md5

; for Windows 2008 with AES
;default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
;default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
;permitted_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5

[realms]
MYDOMAIN.COM.AU = {
kdc = kdc1.mydomain.com.au
kdc = kdc2.mydomain.com.au
kdc = kdc3.mydomain.com.au
kdc = kdc4.mydomain.com.au
admin_server = kdc1.mydomain.com.au
default_domain = mydomain.com.au
}

[domain_realm]
.mydomain.com.au = MYDOMAIN.COM.AU
mydomain.com.au = MYDOMAIN.COM.AU



The squid.conf contains the following custom settings:

auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -i -d -s HTTP/proxy.mydoamin.com.au
auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param basic credentialsttl 2 hours
acl ad_auth proxy_auth REQUIRE
http_access allow ad_auth
http_access allow localnet

(Note: I would like to get rid of the http_access allow localnet, but
even on 2003 when the auth works - internet access is denied without
this line)

My /etc/sysconfig/squid file has the following custom lines:

KRB5_KTNAME=/etc/squid/PROXY.keytab
export KRB5_KTNAME



When I ran this command, the keytab was generated successfully:

msktutil -c -b CN=COMPUTERS -s HTTP/proxy.mydomain.com.au -h proxy.mydomain.com.au -k /etc/squid/PROXY.keytab --computer-name PROXYK --upn HTTP/proxy.mydomain.com.au --server dc1.mydomain.com.au --verbose

The permissions on the keytab are below, which should be fine:

-rw-rw-rw-. 1 root root 1430 Aug  5 08:33 /etc/squid/PROXY.keytab



In summary, the fact that Windows 2003 works and gets authenticated shows
me that Kerberos is working; why won't Windows 2008, 7 or 8 work?



Thanks,



Glenn