[squid-users] Cache everything for 15 minutes - nothing more or less (offline_mode + refresh_pattern?)

2015-10-26 Thread Kris Linquist
I've got a bunch of services connecting to third-party APIs.  Many of these 
APIs register hits even when squid only reaches out to the service to check the 
HTTP headers (I verified that requests logged as TCP_REFRESH_UNMODIFIED/200 are 
being counted against me by the third-party service).

offline_mode on seems to never return fresh results, no matter what my 
refresh_pattern is. I want fresh results every 15 minutes; otherwise, return a 
HIT from memory and don't reach out. Is this possible? Perhaps I just haven't 
found the right combination of refresh_pattern options.

Thanks,
Kris
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
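
(For reference, a refresh_pattern combination along these lines is what is 
usually tried for this goal. It is an untested sketch: the API host is a 
placeholder, the "15" values are minutes, and offline_mode is deliberately left 
off because it suppresses fresh fetches entirely.)

# squid.conf sketch: treat matching responses as fresh for 15 minutes,
# ignoring client reloads and origin cache-control hints, then refetch
refresh_pattern -i ^https?://api\.example\.com/ 15 100% 15 override-expire override-lastmod ignore-reload ignore-no-store ignore-private
offline_mode off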


RE: [squid-users] NTLM Authenticator Statistics 3.3.5

2013-09-30 Thread Kris Glynn
They are all VMware VMs - 2 vCPUs and 4GB of RAM each. They authenticate, 
authorize (based on wbinfo AD group lookups) and cache, and yes, you are correct 
in saying that adding another squid instance is as easy as cloning the VM and 
adding it to the F5 pool.

Each data center is within 8 km of the majority of users, and we have a 1Gig 
uplink from the users to the proxies.

Getting back to the initial problem: I first discovered it when users reported 
they couldn't authenticate to one of the proxies. When I logged into the squid 
server, the cache.log was full of errors like "WARNING: external ACL 
'ldap_group' queue overload. Using stale result". When I dug further I noticed 
that at the top of the cache.log (after the nightly squid -k rotate) there were 
entries such as "ipcCreate: fork: (12) Cannot allocate memory", "WARNING: Cannot 
run '/usr/bin/ntlm_auth' process.", "helperOpenServers: Starting 1/50 
'ext_wbinfo_group_acl' processes" and "WARNING: Cannot run 
'/usr/lib64/squid/ext_wbinfo_group_acl' process." It seemed odd to me that a 
squid -k rotate would restart/stop/start helpers. Shouldn't a squid -k rotate 
leave the helpers alone when it's just instructing squid to rotate the logs?

2013/09/24 00:00:23 kid1| storeDirWriteCleanLogs: Starting...
2013/09/24 00:00:28 kid1| 65536 entries written so far.
2013/09/24 00:00:35 kid1| 131072 entries written so far.
2013/09/24 00:00:40 kid1| 196608 entries written so far.
2013/09/24 00:00:45 kid1| 262144 entries written so far.
2013/09/24 00:00:48 kid1| 327680 entries written so far.
2013/09/24 00:00:51 kid1| 393216 entries written so far.
2013/09/24 00:00:55 kid1| 458752 entries written so far.
2013/09/24 00:00:59 kid1| 524288 entries written so far.
2013/09/24 00:01:02 kid1| 589824 entries written so far.
2013/09/24 00:01:05 kid1| 655360 entries written so far.
2013/09/24 00:01:07 kid1| 720896 entries written so far.
2013/09/24 00:01:08 kid1|   Finished.  Wrote 759594 entries.
2013/09/24 00:01:08 kid1|   Took 44.19 seconds (17189.28 entries/sec).
2013/09/24 00:01:08 kid1| logfileRotate: stdio://var/log/squid/access.log
2013/09/24 00:01:08 kid1| Rotate log file stdio://var/log/squid/access.log
2013/09/24 00:01:08 kid1| helperOpenServers: Starting 10/60 'ntlm_auth' 
processes
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory

When I looked into it further that's when I noticed all of the old 
/usr/bin/ntlm_auth processes still running from months back and 
/usr/bin/squidclient -p 8080 mgr:ntlmauthenticator reporting that 140+ were in 
shutting down state - stopping squid did not stop all of the ntlm_auth 
processes so I had to killall -9 ntlm_auth and then start squid back up again.


Regards

- Kris Glynn: (07) 3295 3987 - 0434602997

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Monday, 30 September 2013 3:43 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] NTLM Authenticator Statistics 3.3.5

Hey Kris,

Well, it's not such a small setup after all.
I do not know the size of these machines, but I assume they have more than a 
single core, so they should work fine.
I am not sure about the next suggestion yet, since I do not know whether the 
proxy is also caching or doing plain authentication only.
I assume these machines can be configured for SMP or multiple instances on the 
same machine.
Since you do have the F5 in place, adding another so-called instance of squid 
is only a matter of adding another LAN IP to the squid machine and that IP to 
the F5.
It can balance the traffic at the process level a bit more than you might be 
doing now.
It's not rocket science, but lots of information is still missing.

A small question:
Has the problem been occurring throughout these 10 days, with the service just 
reviving itself each time, as in the logs?
Also, what is the network distance between the clients and the datacenter? It's 
critical for smooth operation.
Notice that each authentication takes up some traffic, so keep_alive is better 
used to lower that network load.

Let's say the server gets 200 requests in one peak of load: that means 200 
incoming FDs, then 200 stdin/stdout operations, 200 new connections towards the 
auth server/service, and about 200 new outgoing connections in the case of 
non-cached objects.
You can imagine the load on the servers if there are 3k requests per minute.

Eliezer

On 09/30/2013 08:23 AM, Kris Glynn wrote:
 Hi Eliezer,

 I am using 60 because it seemed to me that I needed that many. I am actually 
 running 4 x squid 3.3.5 - two in each data center. They are distributed by a 
 browser PAC file and each of the two in each data center are load balanced by 
 a Bigip F5 Load balancer. The PAC file points at the 2 x F5 Vips.

 As for keepalive, no reason

RE: [squid-users] NTLM Authenticator Statistics 3.3.5

2013-09-30 Thread Kris Glynn
Thanks Amos, that explains helper activity in the cache.log around rotate time.

When the problem occurred I didn't run a mgr:ntlmauthenticators report but on 
one of the proxies just now it has 77 shutting down state and report is here - 
http://pastebin.com/jhaFeW9H



Regards

- Kris Glynn: (07) 3295 3987 - 0434602997

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, 30 September 2013 5:17 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] NTLM Authenticator Statistics 3.3.5

On 30/09/2013 7:26 p.m., Kris Glynn wrote:
 Getting back to the initial problem.. I first discovered it when users 
 reported they couldn't authenticate to one of the proxies, when I logged into 
 the squid server the cache.log was full of errors like WARNING: external ACL 
 'ldap_group' queue overload. Using stale result - when I dug further I 
 noticed at the top of the cache.log (after the nightly squid -k rotate) it 
 had entries such as ipcCreate: fork: (12) Cannot allocate memory WARNING: 
 Cannot run '/usr/bin/ntlm_auth' process. And helperOpenServers: Starting 
 1/50 'ext_wbinfo_group_acl' processes ipcCreate: fork: (12) Cannot allocate 
 memory WARNING: Cannot run '/usr/lib64/squid/ext_wbinfo_group_acl' process.  
 - it seemed odd to me that a squid -k rotate would either restart/stop/start 
 helpers. Shouldn't a squid -k rotate leave helpers alone when it's just 
 instructing squid to rotate the logs?

The helpers are logging to cache.log via stderr. They need to be restarted to 
connect to the new cache.log once it has been rotated.

What does the mgr:ntlmauthenticators report show about the NTLM helpers when 
this is going on?

Amos


RE: [squid-users] NTLM Authenticator Statistics 3.3.5

2013-09-30 Thread Kris Glynn
Thanks, I will look at upgrading, but these are production servers and I notice 
quite a few changes from 3.3.x to 3.4, so I might need to do something about it 
in the meantime.

My idea of a fix is the following to perhaps run every 48hours...

for pid in `/usr/bin/squidclient -p 8080 mgr:ntlmauthenticator |grep RS |awk 
'{print $3}'`; do kill $pid; done

Am I correct in saying that I can kill any pid with flag RS from the 
mgr:ntlmauthenticator output?


Regards

- Kris Glynn: (07) 3295 3987 - 0434602997
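
(A slightly more defensive version of the loop above, offered as a sketch only; 
like the one-liner, it assumes the PID is the third field of the 
mgr:ntlmauthenticator lines and that the RS flags only appear on stuck helpers.)

#!/bin/sh
# kill ntlm_auth helpers that the cache manager reports with flags RS
for pid in $(/usr/bin/squidclient -p 8080 mgr:ntlmauthenticator | grep RS | awk '{print $3}'); do
    echo "killing stuck ntlm_auth helper $pid"
    kill "$pid"
done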

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, 30 September 2013 6:00 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] NTLM Authenticator Statistics 3.3.5

On 30/09/2013 8:26 p.m., Kris Glynn wrote:
 Thanks Amos, that explains helper activity in the cache.log around rotate 
 time.

 When the problem occurred I didn't run a mgr:ntlmauthenticators report
 but on one of the proxies just now it has 77 shutting down state and
 report is here - http://pastebin.com/jhaFeW9H



 Regards

 - Kris Glynn: (07) 3295 3987 - 0434602997

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, 30 September 2013 5:17 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] NTLM Authenticator Statistics 3.3.5

 On 30/09/2013 7:26 p.m., Kris Glynn wrote:
 Getting back to the initial problem.. I first discovered it when users 
 reported they couldn't authenticate to one of the proxies, when I logged 
 into the squid server the cache.log was full of errors like WARNING: 
 external ACL 'ldap_group' queue overload. Using stale result - when I dug 
 further I noticed at the top of the cache.log (after the nightly squid -k 
 rotate) it had entries such as ipcCreate: fork: (12) Cannot allocate memory 
 WARNING: Cannot run '/usr/bin/ntlm_auth' process. And helperOpenServers: 
 Starting 1/50 'ext_wbinfo_group_acl' processes ipcCreate: fork: (12) Cannot 
 allocate memory WARNING: Cannot run '/usr/lib64/squid/ext_wbinfo_group_acl' 
 process.  - it seemed odd to me that a squid -k rotate would either 
 restart/stop/start helpers. Shouldn't a squid -k rotate leave helpers alone 
 when it's just instructing squid to rotate the logs?
 The helpers are logging to cache.log via stderr. They need to be restarted to 
 connect to the new cache.log once it has been rotated.

 What does the mgr:ntlmauthenticators report show about the NTLM helpers when 
 this is going on?

Okay, this looks like you are hitting bug 3643, where Safari (and any other 
clients behaving the same way) could cause the helpers to get stuck in the R / 
Reserved state.

This is fixed in 3.4, but unfortunately the fix requires a few background design 
changes, so it is not in 3.3. Are you able to use the latest daily snapshot of 
3.4 (labeled r12997 or later)?

Amos


[squid-users] NTLM Authenticator Statistics 3.3.5

2013-09-29 Thread Kris Glynn
Hi,

I've noticed that after a while the number of /usr/bin/ntlm_auth processes in 
shutting down state tends to increase and never actually shuts down or decreases.

It is configured like so..

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=15 idle=10
auth_param ntlm keep_alive off

I've found an occurrence where a squid -k rotate (performed daily via cron) ran, 
and helperOpenServers tried to start processes and logged the output below. When 
I logged into the squid server, many old ntlm_auth processes were still running 
(140+ in shutting down state).

Is it normal for a squid -k rotate to spawn helpers? Should I be scheduling a 
squid restart every x days and perhaps a killall -9 ntlm_auth at the same time, 
or does anyone have any suggestions as to why /usr/bin/ntlm_auth processes with 
flags RS increase over time when squid is not being restarted?

2013/09/24 00:00:23 kid1| storeDirWriteCleanLogs: Starting...
2013/09/24 00:00:28 kid1| 65536 entries written so far.
2013/09/24 00:00:35 kid1| 131072 entries written so far.
2013/09/24 00:00:40 kid1| 196608 entries written so far.
2013/09/24 00:00:45 kid1| 262144 entries written so far.
2013/09/24 00:00:48 kid1| 327680 entries written so far.
2013/09/24 00:00:51 kid1| 393216 entries written so far.
2013/09/24 00:00:55 kid1| 458752 entries written so far.
2013/09/24 00:00:59 kid1| 524288 entries written so far.
2013/09/24 00:01:02 kid1| 589824 entries written so far.
2013/09/24 00:01:05 kid1| 655360 entries written so far.
2013/09/24 00:01:07 kid1| 720896 entries written so far.
2013/09/24 00:01:08 kid1|   Finished.  Wrote 759594 entries.
2013/09/24 00:01:08 kid1|   Took 44.19 seconds (17189.28 entries/sec).
2013/09/24 00:01:08 kid1| logfileRotate: stdio://var/log/squid/access.log
2013/09/24 00:01:08 kid1| Rotate log file stdio://var/log/squid/access.log
2013/09/24 00:01:08 kid1| helperOpenServers: Starting 10/60 'ntlm_auth' 
processes
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| helperOpenServers: Starting 1/10 'ntlm_auth' processes
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2013/09/24 00:01:08 kid1| helperOpenServers: Starting 1/50 
'ext_wbinfo_group_acl' processes
2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory

mgr:ntlmauthenticator

NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number active: 40 of 60 (77 shutting down)
requests sent: 9021339
replies received: 9021339
queue length: 0
avg service time: 0 msec


Below is output from mgr:info at the same time the above mgr:ntlmauthenticator 
was run..

Squid Object Cache: Version 3.3.5
Start Time: Wed, 18 Sep 2013 04:48:06 GMT
Current Time:   Mon, 30 Sep 2013 03:50:02 GMT
Connection information for squid:
Number of clients accessing cache:  3540
Number of HTTP requests received:   47586765
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   2763.2
Average ICP messages per minute since start:0.0
Select loop called: 1816815750 times, 0.569 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 13.2%, 60min: 17.0%

RE: [squid-users] NTLM Authenticator Statistics 3.3.5

2013-09-29 Thread Kris Glynn
Hi Eliezer,

I am using 60 because it seemed to me that I needed that many. I am actually 
running 4 x squid 3.3.5 - two in each data center. They are distributed by a 
browser PAC file and each of the two in each data center are load balanced by a 
Bigip F5 Load balancer. The PAC file points at the 2 x F5 Vips.

As for keepalive, there's no reason it is off; I will turn it on and see how it 
goes. Also, Kerberos isn't far off: it's implemented and tested running through 
the F5 load balancer, so I just have to enable it. My test environment is 
running squid 3.3.9 and Kerberos works well.

Each of the 4 proxies has been up for 10 days without a restart and averages 
around:

3000 requests per minute (/usr/bin/squidclient -p 8080 mgr:info | grep "HTTP requests per minute")
3500 clients accessing the cache (/usr/bin/squidclient -p 8080 mgr:info | grep "Number of clients accessing cache")
2500 open files (/usr/bin/squidclient -p 8080 mgr:info | grep "Number of file desc currently in use")
600 usernames in the NTLM username cache (/usr/bin/squidclient mgr:username_cache | grep AUTH | wc -l)

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Monday, 30 September 2013 2:40 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] NTLM Authenticator Statistics 3.3.5

Hey Kris,

I am just wondering why you need 60 children at all.
I am not sure what the reason is for what you are seeing, but you need to make 
sure that all squid instances are off.
If you can, test it: shut down the squid instance and all the subprocesses that 
were forked, and then watch cache.log on a clean startup; it will give more info.
I would also ask why you do not use keep_alive? It is there for a reason.
If it's such a loaded system, I would raise startup from 15 to 30 and idle to 
15, and would try keep_alive on.

If you want to be sure about ntlm_auth, you can add a debug flag, but it will 
probably flood the logs.

Is a Kerberos migration possible?
Since it's 2.5-compatible, I assume it's not that simple?

Eliezer

On 09/30/2013 07:07 AM, Kris Glynn wrote:
 Hi,

 I've noticed after a while the number of /usr/bin/ntlm_auth processes in 
 shutting down state tends to increase and never actually shutdown/decrease.

 It is configured like so..

 auth_param ntlm program /usr/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 60 startup=15 idle=10 auth_param ntlm
 keep_alive off

  I've found an occurrence where a squid -k rotate was performed
 (performed daily via cron) and helperOpenServers tried to start
 processes and logged the below. When I logged into the squid server
 many many old ntlm_auth processes were running (over 140+ in shutting
 down state)

 Is it normal for a squid -k rotate to spawn helpers? Should I be scheduling a 
 squid restart to occur every x days and perhaps killall -9 ntlm_auth at the 
 same time or does anyone have any suggestions as to why /usr/bin/ntlm_auth 
 processes with Flags RS increase over time when not restarting squid?

 2013/09/24 00:00:23 kid1| storeDirWriteCleanLogs: Starting...
 2013/09/24 00:00:28 kid1| 65536 entries written so far.
 2013/09/24 00:00:35 kid1|131072 entries written so far.
 2013/09/24 00:00:40 kid1|196608 entries written so far.
 2013/09/24 00:00:45 kid1|262144 entries written so far.
 2013/09/24 00:00:48 kid1|327680 entries written so far.
 2013/09/24 00:00:51 kid1|393216 entries written so far.
 2013/09/24 00:00:55 kid1|458752 entries written so far.
 2013/09/24 00:00:59 kid1|524288 entries written so far.
 2013/09/24 00:01:02 kid1|589824 entries written so far.
 2013/09/24 00:01:05 kid1|655360 entries written so far.
 2013/09/24 00:01:07 kid1|720896 entries written so far.
 2013/09/24 00:01:08 kid1|   Finished.  Wrote 759594 entries.
 2013/09/24 00:01:08 kid1|   Took 44.19 seconds (17189.28 entries/sec).
 2013/09/24 00:01:08 kid1| logfileRotate:
 stdio://var/log/squid/access.log
 2013/09/24 00:01:08 kid1| Rotate log file
 stdio://var/log/squid/access.log
 2013/09/24 00:01:08 kid1| helperOpenServers: Starting 10/60
 'ntlm_auth' processes
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
 2013/09

RE: [squid-users] kerberos keytab

2013-08-20 Thread Kris Glynn
Just curious.. what conditions might occur that would need the keytab updated?

I've been running Kerberos auth squid for 6+ months now and have not had to 
update the keytab ever.

Is this because the Active Directory account name (proxytest) I used to 
generate the keytab has "Password never expires" set?

I generate with ktpass on the Windows 2008r2 KDC and then copy to squid 
directory..

ktpass.exe -princ HTTP/proxytest.company.internal@COMPANY.INTERNAL -mapuser 
COMPANY\proxytest -crypto rc4-hmac-nt -ptype KRB5_NT_PRINCIPAL +rndpass -out 
HTTP.keytab

This has worked well for me.
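
(For completeness, a sketch of how the copied keytab is typically wired into 
squid; the paths and the sysconfig location are assumptions, not something from 
this thread.)

# e.g. in /etc/sysconfig/squid (or the squid startup environment):
KRB5_KTNAME=/etc/squid/HTTP.keytab
export KRB5_KTNAME
# negotiate_kerberos_auth then picks the keytab up from the environment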



-Original Message-
From: Carlos Defoe [mailto:carlosde...@gmail.com]
Sent: Tuesday, 20 August 2013 7:12 AM
To: hel...@hullen.de
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] kerberos keytab

Thanks, Helmut.

I made a script to check for the file change and run squid -k reconfigure.

I'll wait until the next change to see if it works correctly.

Thank you


On Mon, Aug 19, 2013 at 2:11 PM, Helmut Hullen hul...@t-online.de wrote:
 Hello, Carlos,

 You wrote on 19.08.13:

 What is the best strategy to use a keytab file within multiple
 servers? By now i'm using a NFS share to export the keytab.
 Every day msktutil runs to update the file if necessary. The job is
 schedule in one server only.

 Also, after the update of the keytab file, is it necessary to reload
 squid?

 I'd prefer incron for watching the keytab.

 Rule (pseudo code):
 if the original keytab is changed:
     copy it to the necessary places
     run squid -k reconfigure

 Best regards!
 Helmut
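
(A minimal sketch of the incron rule described above; the paths, host names and 
helper script are hypothetical. An incrontab entry is: path, event mask, 
command.)

# incrontab entry (single line): when the master keytab is rewritten,
# push it out and reconfigure squid
/etc/krb5/HTTP.keytab IN_CLOSE_WRITE /usr/local/sbin/push-keytab.sh

# /usr/local/sbin/push-keytab.sh
#!/bin/sh
for host in proxy1 proxy2; do
    scp /etc/krb5/HTTP.keytab "$host":/etc/squid/HTTP.keytab
    ssh "$host" squid -k reconfigure
done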


RE: [squid-users] Problem with compile squid 3.4.0.1 on RHEL6 x64

2013-07-31 Thread Kris Glynn
-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Thursday, 1 August 2013 5:42 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Problem with compile squid 3.4.0.1 on RHEL6 x64

Hey Kris and Hussam,

This issue probably exists due to the -fPIC flag that is used by default by the 
RPMBUILD tool.
I posted a note about it on the squid-dev list.
If you build squid using regular methods such as ./configure && make && make 
install, it builds fine.
I will later compare the build environments on both Fedora and CentOS.
Since I am not an RH engineer I cannot speak or design for them, but I do design 
for those who need that package up and running, to make sure new bugs don't take 
effect on newer systems.

Eliezer
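
(For reference, the regular non-RPM build referred to above would be roughly the 
following; the prefix is only an example.)

./configure --prefix=/usr/local/squid
make
make install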

On 07/31/2013 02:35 PM, Hussam Al-Tayeb wrote:
 On Wednesday 31 July 2013 01:52:35 Kris Glynn wrote:
 Hi,

 I'm using a squid.spec from squid 3.3 to build 3.4.0.1 but it fails
 with
 /usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation
 R_X86_64_32 against `.rodata' can not be used when making a shared
 object; recompile with -fPIC ../snmplib/libsnmplib.a: could not read 
 symbols: Bad value

 libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith
 -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT -O2 -g
 -fPIC -fpie -march=native -std=c++0x .libs/squidS.o -fPIC -pie -Wl,-z
 -Wl,relro -Wl,-z -Wl,now -o squid AclRegs.o AuthReg.o
 AccessLogEntry.o AsyncEngine.o YesNoNone.o cache_cf.o CacheDigest.o
 cache_manager.o carp.o cbdata.o ChunkedCodingParser.o client_db.o
 client_side.o client_side_reply.o client_side_request.o BodyPipe.o
 clientStream.o CompletionDispatcher.o ConfigOption.o ConfigParser.o
 CpuAffinity.o CpuAffinityMap.o CpuAffinitySet.o debug.o delay_pools.o
 DelayId.o DelayBucket.o DelayConfig.o DelayPool.o DelaySpec.o
 DelayTagged.o DelayUser.o DelayVector.o NullDelayId.o
 ClientDelayConfig.o disk.o DiskIO/DiskIOModule.o DiskIO/ReadRequest.o
 DiskIO/WriteRequest.o dlink.o dns_internal.o DnsLookupDetails.o
 errorpage.o ETag.o event.o EventLoop.o external_acl.o
 ExternalACLEntry.o FadingCounter.o fatal.o fd.o fde.o filemap.o
 fqdncache.o ftp.o FwdState.o gopher.o helper.o HelperChildConfig.o
 HelperReply.o htcp.o http.o HttpHdrCc.o HttpHdrRange.o HttpHdrSc.o
 HttpHdrScTarget.o HttpHdrContRange.o HttpHeader.o HttpHeaderTools.o
 HttpBody.o HttpMsg.o HttpParser.o HttpReply.o RequestFlags.o
 HttpRequest.o HttpRequestMethod.o icp_v2.o icp_v3.o int.o internal.o
 ipc.o ipcache.o SquidList.o main.o MasterXaction.o mem.o mem_node.o
 MemBuf.o MemObject.o mime.o mime_header.o multicast.o neighbors.o
 Notes.o Packer.o Parsing.o pconn.o peer_digest.o
 peer_proxy_negotiate_auth.o peer_select.o peer_sourcehash.o
 peer_userhash.o redirect.o refresh.o RemovalPolicy.o send-announce.o
 MemBlob.o snmp_core.o snmp_agent.o SquidMath.o SquidNew.o stat.o
 StatCounters.o StatHist.o String.o StrList.o stmem.o store.o
 StoreFileSystem.o store_io.o StoreIOState.o store_client.o
 store_digest.o store_dir.o store_key_md5.o store_log.o
 store_rebuild.o store_swapin.o store_swapmeta.o store_swapout.o
 StoreMeta.o StoreMetaMD5.o StoreMetaSTD.o StoreMetaSTDLFS.o
 StoreMetaUnpacker.o StoreMetaURL.o StoreMetaVary.o StoreStats.o
 StoreSwapLogData.o Server.o SwapDir.o MemStore.o time.o tools.o
 tunnel.o unlinkd.o url.o URLScheme.o urn.o wccp.o wccp2.o whois.o
 wordlist.o LoadableModule.o LoadableModules.o
 DiskIO/DiskIOModules_gen.o err_type.o err_detail_type.o globals.o
 hier_code.o icp_opcode.o LogTags.o lookup_t.o repl_modules.o
 swap_log_op.o DiskIO/AIO/AIODiskIOModule.o
 DiskIO/Blocking/BlockingDiskIOModule.o
 DiskIO/DiskDaemon/DiskDaemonDiskIOModule.o
 DiskIO/DiskThreads/DiskThreadsDiskIOModule.o
 DiskIO/IpcIo/IpcIoDiskIOModule.o DiskIO/Mmapped/MmappedDiskIOModule.o
 -Wl,--export-dynamic  auth/.libs/libacls.a ident/.libs/libident.a
 acl/.libs/libacls.a acl/.libs/libstate.a auth/.libs/libauth.a
 libAIO.a libBlocking.a libDiskDaemon.a libDiskThreads.a libIpcIo.a
 libMmapped.a acl/.libs/libapi.a base/.libs/libbase.a
 ./.libs/libsquid.a ip/.libs/libip.a fs/.libs/libfs.a
 ipc/.libs/libipc.a mgr/.libs/libmgr.a anyp/.libs/libanyp.a
 comm/.libs/libcomm.a eui/.libs/libeui.a http/.libs/libsquid-http.a
 icmp/.libs/libicmp.a icmp/.libs/libicmp-core.a log/.libs/liblog.a
 format/.libs/libformat.a repl/libheap.a repl/liblru.a -lpthread
 -lcrypt adaptation/.libs/libadaptation.a esi/.libs/libesi.a
 ../lib/libTrie/libTrie.a -lxml2 -lexpat ssl/.libs/libsslsquid.a
 ssl/.libs/libsslutil.a snmp/.libs/libsnmp.a ../snmplib/libsnmplib.a
 ../lib/.libs/libmisccontainers.a ../lib/.libs/libmiscencoding.a
 ../lib/.libs/libmiscutil.a -lssl -lcrypto -lgssapi_krb5 -lkrb5
 -lk5crypto -lcom_err -L/root/rpmbuild/BUILD/squid-3.4.0.1/compat
 -lcompat-squid -lm -lnsl -lresolv -lcap -lrt -ldl
 -L/root/rpmbuild/BUILD/squid-3.4.0.1 -lltdl
 /usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation
 R_X86_64_32 against `.rodata' can not be used when making a shared
 object

[squid-users] Problem with compile squid 3.4.0.1 on RHEL6 x64

2013-07-30 Thread Kris Glynn
Hi,

I'm using a squid.spec from squid 3.3 to build 3.4.0.1 but it fails with 
/usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32 
against `.rodata' can not be used when making a shared object; recompile with 
-fPIC
../snmplib/libsnmplib.a: could not read symbols: Bad value

libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -O2 -g -fPIC -fpie -march=native 
-std=c++0x .libs/squidS.o -fPIC -pie -Wl,-z -Wl,relro -Wl,-z -Wl,now -o squid 
AclRegs.o AuthReg.o AccessLogEntry.o AsyncEngine.o YesNoNone.o cache_cf.o 
CacheDigest.o cache_manager.o carp.o cbdata.o ChunkedCodingParser.o client_db.o 
client_side.o client_side_reply.o client_side_request.o BodyPipe.o 
clientStream.o CompletionDispatcher.o ConfigOption.o ConfigParser.o 
CpuAffinity.o CpuAffinityMap.o CpuAffinitySet.o debug.o delay_pools.o DelayId.o 
DelayBucket.o DelayConfig.o DelayPool.o DelaySpec.o DelayTagged.o DelayUser.o 
DelayVector.o NullDelayId.o ClientDelayConfig.o disk.o DiskIO/DiskIOModule.o 
DiskIO/ReadRequest.o DiskIO/WriteRequest.o dlink.o dns_internal.o 
DnsLookupDetails.o errorpage.o ETag.o event.o EventLoop.o external_acl.o 
ExternalACLEntry.o FadingCounter.o fatal.o fd.o fde.o filemap.o fqdncache.o 
ftp.o FwdState.o gopher.o helper.o HelperChildConfig.o HelperReply.o htcp.o 
http.o HttpHdrCc.o HttpHdrRange.o HttpHdrSc.o HttpHdrScTarget.o 
HttpHdrContRange.o HttpHeader.o HttpHeaderTools.o HttpBody.o HttpMsg.o 
HttpParser.o HttpReply.o RequestFlags.o HttpRequest.o HttpRequestMethod.o 
icp_v2.o icp_v3.o int.o internal.o ipc.o ipcache.o SquidList.o main.o 
MasterXaction.o mem.o mem_node.o MemBuf.o MemObject.o mime.o mime_header.o 
multicast.o neighbors.o Notes.o Packer.o Parsing.o pconn.o peer_digest.o 
peer_proxy_negotiate_auth.o peer_select.o peer_sourcehash.o peer_userhash.o 
redirect.o refresh.o RemovalPolicy.o send-announce.o MemBlob.o snmp_core.o 
snmp_agent.o SquidMath.o SquidNew.o stat.o StatCounters.o StatHist.o String.o 
StrList.o stmem.o store.o StoreFileSystem.o store_io.o StoreIOState.o 
store_client.o store_digest.o store_dir.o store_key_md5.o store_log.o 
store_rebuild.o store_swapin.o store_swapmeta.o store_swapout.o StoreMeta.o 
StoreMetaMD5.o StoreMetaSTD.o StoreMetaSTDLFS.o StoreMetaUnpacker.o 
StoreMetaURL.o StoreMetaVary.o StoreStats.o StoreSwapLogData.o Server.o 
SwapDir.o MemStore.o time.o tools.o tunnel.o unlinkd.o url.o URLScheme.o urn.o 
wccp.o wccp2.o whois.o wordlist.o LoadableModule.o LoadableModules.o 
DiskIO/DiskIOModules_gen.o err_type.o err_detail_type.o globals.o hier_code.o 
icp_opcode.o LogTags.o lookup_t.o repl_modules.o swap_log_op.o 
DiskIO/AIO/AIODiskIOModule.o DiskIO/Blocking/BlockingDiskIOModule.o 
DiskIO/DiskDaemon/DiskDaemonDiskIOModule.o 
DiskIO/DiskThreads/DiskThreadsDiskIOModule.o DiskIO/IpcIo/IpcIoDiskIOModule.o 
DiskIO/Mmapped/MmappedDiskIOModule.o -Wl,--export-dynamic  auth/.libs/libacls.a 
ident/.libs/libident.a acl/.libs/libacls.a acl/.libs/libstate.a 
auth/.libs/libauth.a libAIO.a libBlocking.a libDiskDaemon.a libDiskThreads.a 
libIpcIo.a libMmapped.a acl/.libs/libapi.a base/.libs/libbase.a 
./.libs/libsquid.a ip/.libs/libip.a fs/.libs/libfs.a ipc/.libs/libipc.a 
mgr/.libs/libmgr.a anyp/.libs/libanyp.a comm/.libs/libcomm.a eui/.libs/libeui.a 
http/.libs/libsquid-http.a icmp/.libs/libicmp.a icmp/.libs/libicmp-core.a 
log/.libs/liblog.a format/.libs/libformat.a repl/libheap.a repl/liblru.a 
-lpthread -lcrypt adaptation/.libs/libadaptation.a esi/.libs/libesi.a 
../lib/libTrie/libTrie.a -lxml2 -lexpat ssl/.libs/libsslsquid.a 
ssl/.libs/libsslutil.a snmp/.libs/libsnmp.a ../snmplib/libsnmplib.a 
../lib/.libs/libmisccontainers.a ../lib/.libs/libmiscencoding.a 
../lib/.libs/libmiscutil.a -lssl -lcrypto -lgssapi_krb5 -lkrb5 -lk5crypto 
-lcom_err -L/root/rpmbuild/BUILD/squid-3.4.0.1/compat -lcompat-squid -lm -lnsl 
-lresolv -lcap -lrt -ldl -L/root/rpmbuild/BUILD/squid-3.4.0.1 -lltdl
/usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32 
against `.rodata' can not be used when making a shared object; recompile with 
-fPIC
../snmplib/libsnmplib.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
libtool: link: rm -f .libs/squidS.o
make[3]: *** [squid] Error 1
make[3]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make: *** [all-recursive] Error 1

Any ideas?






RE: [squid-users] acl file for multiple users authentication by AD?

2013-07-11 Thread Kris Glynn
acl proxy_admins proxy_auth "/etc/squid/proxyadminuser.txt"

cat /etc/squid/proxyadminuser.txt
user-a
user-b
user-c
...

http_access allow proxy_admins
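
(A fuller sketch of the same idea for the original question below, combined with 
its existing squid_ldap_auth basic auth; the file path and the http_access 
ordering are illustrative. The double-quoted path is what tells squid to read 
the user names from a file.)

acl restricted_users proxy_auth "/etc/squid/restrict_users.acl"
http_access allow restricted_users
http_access deny all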


-Original Message-
From: Beto Moreno [mailto:pam...@gmail.com]
Sent: Friday, 12 July 2013 1:59 PM
To: squid-users@squid-cache.org
Subject: [squid-users] acl file for multiple users authentication by AD?

Very simple question: I authenticate squid against AD using squid_ldap_auth, 
but is there a way to add a group of users to a file and apply my acls to them?

restrict_users.acl
user-a
user-b
user-c
user-d

acl restrict_users path/restrict_users.acl?

I have seen how to do it with IP addresses but haven't seen it with users; is it 
possible?

Thanks.


[squid-users] RE: Diffence between NTLM in 2.6 compared to 3.3.5 - Citrix ?

2013-05-30 Thread Kris Glynn
-Original Message-
From: Kris Glynn
Sent: Wednesday, 29 May 2013 1:07 PM
To: squid-users@squid-cache.org
Subject: Diffence between NTLM in 2.6 compared to 3.3.5 - Citrix ?

I've noticed that since upgrading from Squid 2.6 to Squid 3.3.5 the Citrix ICA 
Client will no longer authenticate via NTLM to squid 3.3.5 - the ICA client 
just keeps popping up asking for NTLM auth - at no stage does it fallback to 
basic auth.

Every other NTLM aware application whether it be IE, Firefox, Chrome and even 
curl works fine and can authenticate no problems via NTLM however the Citrix 
ICA client just won't work.

If I change back to squid 2.6 it works fine. Both are using exactly the same 
squid.conf with...

# Pure NTLM Auth - fallback
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=15 idle=10
auth_param ntlm keep_alive off

# BASIC Auth - fallback
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Internet Access
auth_param basic credentialsttl 1 hours

Has anyone else experienced this?

To answer my own question: it was due to the Citrix ICA Client (I'm using 
13.4.0, the latest version) ignoring the Connection: keep-alive header from 
squid 3.3.x and starting a new connection, which breaks the NTLM auth challenge.

Squid 2.6.x sends Proxy-Connection: keep-alive with NTLM auth responses, which 
is the only header the Citrix ICA Client appears to accept to maintain the 
keepalive.

What RFC can I point Citrix at so I can submit a bug with them to fix their 
client to accept both headers? Am I correct in saying that Squid 2.6 is an 
HTTP/1.0 proxy and 3.x are HTTP/1.1 proxies?










[squid-users] Diffence between NTLM in 2.6 compared to 3.3.5 - Citrix ?

2013-05-28 Thread Kris Glynn
I've noticed that since upgrading from Squid 2.6 to Squid 3.3.5 the Citrix ICA 
Client will no longer authenticate via NTLM to squid 3.3.5 - the ICA client 
just keeps popping up asking for NTLM auth - at no stage does it fallback to 
basic auth.

Every other NTLM aware application whether it be IE, Firefox, Chrome and even 
curl works fine and can authenticate no problems via NTLM however the Citrix 
ICA client just won't work.

If I change back to squid 2.6 it works fine. Both are using exactly the same 
squid.conf with...

# Pure NTLM Auth - fallback
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=15 idle=10
auth_param ntlm keep_alive off

# BASIC Auth - fallback
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Internet Access
auth_param basic credentialsttl 1 hours

Has anyone else experienced this?













RE: [squid-users] Looking for squid spec file

2013-05-13 Thread Kris Glynn
-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Tuesday, 14 May 2013 8:53 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Looking for squid spec file

On 5/13/2013 6:13 PM, Amm wrote:
 Well one can modify it to require for init.d (or whatever that package
 is called)

 Or even pick up spec file from previous Fedora releases.

 Amm


And since someone on the user list has a ready-to-use spec file, just share it 
with me and I will use it.

Now I don't have the head to work on it too much.
Why work hard for a long time only to find that someone else already has the file?

Eliezer

I use this for RHEL6 - I guess it should work for Centos

http://netsick.dyndns.org/squid.spec.3.3

Built the latest squid 3.3.4 with it..



[squid-users] DNS search not working - Squid Cache: Version 3.3.3

2013-04-16 Thread Kris Glynn
Hi,

Given the following, why doesn't the DNS search list work, when my nameserver 
1.1.1.1 contains valid DNS entries for test.blue.internal and 
test2.green.internal?

GET http://test/
GET http://test2/

.. both return DNS entry not found in squid.


/etc/resolv.conf

options rotate
search blue.internal green.internal
nameserver 1.1.1.1


squidclient -p 8080 mgr:idns

Internal DNS Statistics:

Nameservers:
IP ADDRESS # QUERIES # REPLIES
1.1.1.1 205   205

*snip*
Search list:
blue.internal
green.internal
*snip*


I do not have append_domain set in squid.conf - I've tried adding it but it 
only accepts one domain not two..

Clearly running  squidclient -p 8080 mgr:idns shows that squid has consumed 
my /etc/resolv.conf and I can nslookup test and test2 from the bash shell..



[root@squid]# nslookup
> test
Server:  1.1.1.1
Address: 1.1.1.1#53

Name:    test.blue.internal
Address: 192.168.48.41



[root@squid]# nslookup
> test2
Server:  1.1.1.1
Address: 1.1.1.1#53

Name:    test2.green.internal
Address: 192.168.48.42
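
(One direction sometimes suggested for this symptom, offered here as an 
assumption rather than a confirmed fix for 3.3.3: squid's internal resolver does 
not expand single-label names with the resolv.conf search list unless told to, 
and the dns_defnames directive is the usual knob for that.)

# squid.conf
dns_defnames on   # let single-label hostnames be expanded via the search list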




[squid-users] ext_kerberos_ldap_group_acl - how to ?

2013-02-07 Thread Kris Glynn
Hi,

I can not for the life of me work out how to use ext_kerberos_ldap_group_acl 
with squid 3.2.6

I have authentication with negotiate_kerberos_auth working fine, but I also want 
an authorisation helper for group membership.

Relevant squid.conf config below..

# Kerberos Auth
auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
auth_param negotiate children 40
auth_param negotiate keep_alive on

# Group ACL Helper
external_acl_type ldap_group ttl=60 negative_ttl=60 %LOGIN 
/usr/lib64/squid/ext_kerberos_ldap_group_acl -d -g ALL@ -D my.internal

What is the ALL@ for? Does someone have a working config against Windows 2008 
AD/LDAP?

To be honest, at the moment I am using this external helper 
ext_wbinfo_group_acl which is working fine..

external_acl_type ldap_group ttl=300 children-max=50 children-startup=40 %LOGIN 
/usr/lib64/squid/ext_wbinfo_group_acl -K

.. but is ext_kerberos_ldap_group_acl better or should I leave authorisation up 
to ext_wbinfo_group_acl since I have it working?

Is either better than the other?

Thanks
Kris
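
(For what it's worth, the acl/http_access wiring that usually accompanies such a 
helper line looks roughly like the sketch below; the group name InternetUsers@ 
is a placeholder, not something from this thread.)

external_acl_type kerb_group ttl=300 negative_ttl=60 %LOGIN /usr/lib64/squid/ext_kerberos_ldap_group_acl -g InternetUsers@ -D my.internal
acl internet_users external kerb_group
http_access allow internet_users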






[squid-users] Squid Cache: Version 3.1.15 - Adding custom header

2011-09-12 Thread Kris Glynn
Hi,

Can I add a custom header to outgoing HTTP requests from squid? I have compiled 
with --enable-http-violations.

I've tried something like this below but it doesn't appear to work.

request_header_replace X-Sophos-WSA-ClientIP %SRC

Can anyone suggest a way to achieve this ?













RE: [squid-users] Squid Cache: Version 3.1.15 - Adding custom header

2011-09-12 Thread Kris Glynn
Thank you.

If, for instance, there were a header to replace and it were an RFC-defined 
header, would this work?

request_header_replace X-Sophos-WSA-ClientIP %SRC

Is %SRC a valid parameter?


Regards

- Kris Glynn: (07) 3295 3987 - 0434602997


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 13 September 2011 2:58 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Cache: Version 3.1.15 - Adding custom header

On 13/09/11 15:44, Kris Glynn wrote:
 Hi,

 Can I add a custom header to outgoing http requests from squid. I have 
 compiled with --enable-http-violations

 I've tried something like this below but it doesn't appear to work.

 request_header_replace X-Sophos-WSA-ClientIP %SRC

replace is actually *replace*: there must be a copy of the same header removed 
from the request in order for it to be replaced.

Additionally, this is only possible in current Squid with registered 
(RFC-defined) headers. Unknown custom headers cannot be replaced like this. 
Sponsorship or patches are welcome to enable this for unregistered headers.


 Can anyone suggest a way to achieve this ?


Client IP information is already provided by Squid in these common headers:
   X-Forwarded-For: [... ,] $(client-ip)
   X-Client-IP: $(client-ip)

Ensure forwarded_for is ON (the default) to receive them from your Squid.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.15
   Beta testers wanted for 3.2.0.11
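
(A minimal sketch of what Amos describes above, assuming default header 
controls; no request_header_replace is needed for the standard headers.)

# squid.conf
forwarded_for on   # appends the client IP to X-Forwarded-For on outgoing requests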


RE: [squid-users] Squid 2.6 - Deny all users in a specific Active Directory OU (not group)

2010-05-18 Thread Kris Glynn
Thank you very much Henrik.

A few things I would like to mention.

1. You specify using external_acl_program, but I assume you mean 
external_acl_type.
2. What does the X mean in this acl line: acl ldap_service_accounts external 
ldap_service_accounts X?

Again, thanks for the prompt response.

Regards

- Kris Glynn: (07) 3295 3987 - 0434602997


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Wednesday, 19 May 2010 5:32 AM
To: Kris Glynn
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 2.6 - Deny all users in a specific Active 
Directory OU (not group)

Tue 2010-05-18 at 14:33 +1000, Kris Glynn wrote:

 I would like to know if it is possible to deny/allow based on a specific OU 
 in Active Directory.

Yes. The squid_ldap_group helper can do this by simply searching for the
user again below that OU and denying access if found.

external_acl_program ldap_service_accounts %LOGIN /usr/lib/squid_ldap_group -R 
-b "OU=Service Accounts,dc=company,dc=internal" -D username -w password -f 
"(&(sAMAccountName=%u)(objectClass=Person))" -h 192.168.60.4
acl ldap_service_accounts external ldap_service_accounts X
http_access deny ldap_service_accounts

If you have many of these OUs that you want to match, then the -g option to 
squid_ldap_group may be handy, enabling you to add the OU part via the acl line. 
But it is a little tricky if the OU contains spaces, as in your OU=Service 
Accounts (requires an acl include file).

Regards
Henrik


RE: [squid-users] Squid 2.6 - Deny all users in a specific Active Directory OU (not group)

2010-05-18 Thread Kris Glynn
Thanks for the info.

Can the same be achieved with the NTLM helper given this initial configuration ?

external_acl_type ldap_group ttl=300 children=40 %LOGIN 
/usr/lib/squid/wbinfo_group.pl

Can we allow/deny users in a specific OU with NTLM ?


Regards

- Kris Glynn: (07) 3295 3987 - 0434602997


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Wednesday, 19 May 2010 11:02 AM
To: Kris Glynn
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid 2.6 - Deny all users in a specific Active 
Directory OU (not group)

Wed 2010-05-19 at 10:54 +1000, Kris Glynn wrote:
 Thank you very much Henrik.
 
 A few things I would like to mention.
 
 1. You specify using external_acl_program but I assume you mean 
 external_acl_type

Correct.

 2. What does the X mean in this acl line: acl ldap_service_accounts 
 external ldap_service_accounts X?

It's a dummy group name. The helper is designed for group lookups, and I am
not sure the helper is happy without a group argument. It is actually ignored,
because the filter does not include %g for the group.

Regards
Henrik
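
For contrast, a sketch of a lookup where the group argument is not a dummy: the
ldap_group definition from the original question does use the acl argument (as %a
in the -f filter), so there the group name on the acl line matters. The group name
below is only an example:

external_acl_type ldap_group ttl=3600 children=60 %LOGIN /usr/lib/squid/squid_ldap_group -R -b dc=company,dc=internal -B dc=company,dc=internal -F "(&(sAMAccountName=%s)(objectClass=Person))" -f "(&(member=%v)(cn=%a))" -D username -w password -h 192.168.60.4 -P
acl internet_users external ldap_group InternetUsers
http_access allow internet_users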


[squid-users] Squid 2.6 - Deny all users in a specific Active Directory OU (not group)

2010-05-17 Thread Kris Glynn
Hi,

I would like to know if it is possible to deny/allow based on a specific OU in 
Active Directory.

Problem: I have an OU (OU=Service Accounts,dc=company,dc=internal) that 
contains accounts that should not be allowed access through squid.

How would I go about denying access to all users in OU=Service 
Accounts,dc=company,dc=internal given my current ldap configuration below.

auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b 
dc=company,dc=internal -D username -w password -f 
"(&(sAMAccountName=%s)(objectClass=Person))" -t 10 -h 192.168.60.4 
auth_param basic children 40
auth_param basic realm Internet Access
auth_param basic credentialsttl 1 hours

external_acl_type ldap_group ttl=3600 children=60 %LOGIN 
/usr/lib/squid/squid_ldap_group -R -b dc=company,dc=internal -B 
dc=company,dc=internal -F "(&(sAMAccountName=%s)(objectClass=Person))" -f 
"(&(member=%v)(cn=%a))" -D username -w password -h 192.168.60.4 -P

Thanks
Kris


[squid-users] storeUpdateCopy Error issue

2010-04-24 Thread Kris

Dear,

I run my proxy with about 100-200 hits per second. It runs smoothly, but after a 
few days my users complained that the proxy had become slow, and in cache.log I 
see a bunch of errors like this:


storeUpdateCopy: Error at 280 (-1)
storeUpdateCopy: Error at 384 (-1)

As a temporary fix I removed the whole cache and built a new one. It has happened 
twice in two weeks. Any suggestion as to what causes this problem?



Re: [squid-users] storeUpdateCopy Error issue

2010-04-24 Thread Kris
Yes, I'm using a new 1.5 TB hard disk but only use 300 GB of it as the cache dir, 
with the reiserfs filesystem. My machine restarts every day via crond.


Jeff Pang wrote:

On Sat, Apr 24, 2010 at 3:06 PM, Kris christ...@wanxp.com wrote:
  

Dear,

i run my proxy well with 100-200 hit per second. it run smooth but after few
day , my user complained proxy become slow , in cache.log i see bunch of
eror like this

storeUpdateCopy: Error at 280 (-1)
storeUpdateCopy: Error at 384 (-1)

for current temporary fix i removed all cache and build new one. it happened
me twice on 2 weeks. any suggestion what cause this problem ?




It seems to be a disk writing error.
Are you sure the hard disk and filesystem are all right?
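
If in doubt, it can be worth checking the kernel log and the SMART status of the
cache disks; a sketch, with a placeholder device name:

# look for I/O errors reported by the kernel
dmesg | grep -i error
# query the drive's SMART health summary (needs smartmontools)
smartctl -H /dev/sda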


  




Re: [squid-users] Best Configuration for sibling peer

2009-12-16 Thread Kris

Chris Robertson wrote:

Kris wrote:

Dear All,

i have 4 proxy server with about 1000 request per second average , i 
have extra free nic in every server and i connected that 4 proxy to 1 
switch and give them 1 network ip. i set 4 proxy as SIBLING each 
other. after few days i got problem like.


1. weird error log (sometimes)

2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:28| storeClientReadHeader: no URL!
2009/12/16 09:48:29| storeClientReadHeader: no URL!

2. TCP Connection Failed

2009/12/16 09:45:15| TCP connection to 10.10.10.10 (10.10.10.10:3128) 
failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 (10.10.10.11:3128) 
failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 (10.10.10.11:3128) 
failed
2009/12/16 09:45:28| TCP connection to 10.10.10.13 (10.10.10.13:3128) 
failed
2009/12/16 09:45:36| TCP connection to 10.10.10.12 (10.10.10.12:3128) 
failed


3. sometimes it become hard to browse , but it going normal after i 
disabled all SIBLING, if i enabled it will slow again.


my file descriptor already 65535

my peer config
# cache_peer 10.10.10.10 sibling 3128 3130 no-netdb-exchange 
no-digest no-delay round-robin proxy-only
cache_peer 10.10.10.11 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only
cache_peer 10.10.10.12 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only
cache_peer 10.10.10.13 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only


any suggestion what best configuration for sibling peer ?


I don't think that round-robin makes much sense with sibling 
selection.  Either a sibling has the object (and we grab the 
object from it) or it doesn't (and we grab the object directly).


For myself, I use a line like...

cache_peer proxypool-2.my.domain sibling 8080 3130 no-digest

Chris


Did you ever get messages like "TCP connection failed"? Just curious 
why I always get that message; the peer connections use the same network on the 
same switch.


Re: [squid-users] Best Configuration for sibling peer

2009-12-16 Thread Kris

Michael Bowe wrote:

-Original Message-
From: Kris [mailto:christ...@wanxp.com]



Hi Kris

  

2. TCP Connection Failed



Are you running iptables?
If so, is the conntrack table overflowing?

  

my peer config
# cache_peer 10.10.10.10 sibling 3128 3130 no-netdb-exchange no-digest
no-delay round-robin proxy-only
cache_peer 10.10.10.11 sibling 3128 3130 no-netdb-exchange no-digest
no-delay round-robin proxy-only
cache_peer 10.10.10.12 sibling 3128 3130 no-netdb-exchange no-digest
no-delay round-robin proxy-only
cache_peer 10.10.10.13 sibling 3128 3130 no-netdb-exchange no-digest
no-delay round-robin proxy-only

any suggestion what best configuration for sibling peer ?



I'm not sure the above is going to give you good results.

Enabling digest would save a lot of ICP traffic / lookups

As pointed out by Chris, the round-robin option is used with parent selection
in the absence of ICP, but in your case you are using peers with ICP. 


If you are trying to prevent overlapping disk objects between the siblings
then I reckon your syntax should just be something like this :

cache_peer 10.10.10.1x sibling 3128 3130 proxy-only

Michael.


  

I'm not using any iptables.

About the proxy-only option: isn't it good to disable proxy-only? Then the 
proxy doesn't have to fetch the same object from a sibling again, which should 
save ICP requests, right?


Another question: how many ms should I use for these lines?

icp_query_timeout 2000
maximum_icp_query_timeout 2000
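
For reference, a sketch of the usual settings: leaving icp_query_timeout at 0 lets
Squid derive the timeout dynamically from measured peer round-trip times, capped by
maximum_icp_query_timeout (both values are in milliseconds):

icp_query_timeout 0
maximum_icp_query_timeout 2000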



Re: [squid-users] Best Configuration for sibling peer

2009-12-16 Thread Kris

Chris Robertson wrote:

Kris wrote:

Chris Robertson wrote:

Kris wrote:

Dear All,

i have 4 proxy server with about 1000 request per second average , 
i have extra free nic in every server and i connected that 4 proxy 
to 1 switch and give them 1 network ip. i set 4 proxy as SIBLING 
each other. after few days i got problem like.


1. weird error log (sometimes)

2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:28| storeClientReadHeader: no URL!
2009/12/16 09:48:29| storeClientReadHeader: no URL!

2. TCP Connection Failed

2009/12/16 09:45:15| TCP connection to 10.10.10.10 
(10.10.10.10:3128) failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 
(10.10.10.11:3128) failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 
(10.10.10.11:3128) failed
2009/12/16 09:45:28| TCP connection to 10.10.10.13 
(10.10.10.13:3128) failed
2009/12/16 09:45:36| TCP connection to 10.10.10.12 
(10.10.10.12:3128) failed


3. sometimes it become hard to browse , but it going normal after i 
disabled all SIBLING, if i enabled it will slow again.


my file descriptor already 65535

my peer config
# cache_peer 10.10.10.10 sibling 3128 3130 no-netdb-exchange 
no-digest no-delay round-robin proxy-only
cache_peer 10.10.10.11 sibling 3128 3130 no-netdb-exchange 
no-digest no-delay round-robin proxy-only
cache_peer 10.10.10.12 sibling 3128 3130 no-netdb-exchange 
no-digest no-delay round-robin proxy-only
cache_peer 10.10.10.13 sibling 3128 3130 no-netdb-exchange 
no-digest no-delay round-robin proxy-only


any suggestion what best configuration for sibling peer ?


I don't think that round-robin makes much sense with sibling 
selection.  Either the a sibling has the object (and we grab the 
object from it) or it doesn't (and we grab the object directly).


For myself, I use a line like...

cache_peer proxypool-2.my.domain sibling 8080 3130 no-digest

Chris


did you ever get message like  TCP Connection Failed ? just curious 
why i always got that message , peer connection use same network in 
same switch.


My peers use the same interface to talk to clients and each other.  I 
don't see TCP Connection Failed messages.  Hmmm...  One thing you 
might try...


acl myPeers src 10.10.10.0/24
tcp_outgoing_address 10.10.10.10 myPeers

...to make sure it responds to the peers using the right interface.

Chris




I have 1000 request hits per second on every proxy; could that be the cause?


[squid-users] Best Configuration for sibling peer

2009-12-15 Thread Kris

Dear All,

I have 4 proxy servers with about 1000 requests per second on average. I 
have an extra free NIC in every server, so I connected the 4 proxies to 1 
switch and gave them 1 network/IP range. I set the 4 proxies up as SIBLINGs of 
each other. After a few days I got problems like the following.


1. weird error log (sometimes)

2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:25| storeClientReadHeader: no URL!
2009/12/16 09:48:28| storeClientReadHeader: no URL!
2009/12/16 09:48:29| storeClientReadHeader: no URL!

2. TCP Connection Failed

2009/12/16 09:45:15| TCP connection to 10.10.10.10 (10.10.10.10:3128) failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 (10.10.10.11:3128) failed
2009/12/16 09:45:22| TCP connection to 10.10.10.11 (10.10.10.11:3128) failed
2009/12/16 09:45:28| TCP connection to 10.10.10.13 (10.10.10.13:3128) failed
2009/12/16 09:45:36| TCP connection to 10.10.10.12 (10.10.10.12:3128) failed

3. Sometimes browsing becomes very slow, but it goes back to normal after I 
disable all the SIBLINGs; if I re-enable them it becomes slow again.


My file descriptor limit is already 65535.

my peer config
# cache_peer 10.10.10.10 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only
cache_peer 10.10.10.11 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only
cache_peer 10.10.10.12 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only
cache_peer 10.10.10.13 sibling 3128 3130 no-netdb-exchange no-digest 
no-delay round-robin proxy-only


Any suggestion for the best configuration for sibling peers?


[squid-users] Compressing Object

2007-12-11 Thread Kris

Dear,

Is it possible for Squid to compress objects before sending them to the client's 
browser, or to a child Squid?


Regards,


Kristian


Re: [squid-users] Compressing Object

2007-12-11 Thread Kris

Dear,

I saw a company that provides a proxy accelerator with compression.

I don't know if it is permissible to post their company link here (afraid of 
advertising abuse). Their website says they are using Squid for the compression.


Adrian Chadd wrote:

On Tue, Dec 11, 2007, Falk Husemann wrote:

  

Not yet. Transfer Encoding is in the pipeline in future squid releases
but like everything its chasing a sponsor for it.
  
Is there something in the tree yet? I'm running a quite large cache  
here and would like to help get transfer encoding going. Especially  
for our WLAN-Users :-)



There's some preliminary experimental work done with squid-3, but it was
done a while ago and I'm not sure what the timeframe is for getting that
to work. I forget where that work is too; I think it's somewhere on
devel.squid-cache.org.

Squid-2 isn't yet in good enough shape to implement transfer encodings. There need
to be some improvements to the client-side and server-side codebase to support
transfer encodings, which may end up happening in time.




Adrian



  




Re: [squid-users] Compressing Object

2007-12-11 Thread Kris

I already sent their company link to you, sir :)

Adrian Chadd wrote:

On Tue, Dec 11, 2007, Kris wrote:
  

Dear,

i saw a company that provide proxy acceleator with compression.

i dont know if it permitable to post their company link here (afraid of 
adv abuse). the website said they`r using squid for compressing.



Forward me the details? I'll have a chat with them.



Adrian



  




Re: [squid-users] squid cant cache flv (youtube)

2007-08-27 Thread Kris

So that means it's impossible to cache YouTube using Squid right now?

Manoj_Rajkarnikar wrote:



On Sun, 26 Aug 2007, Kris wrote:

Tried to put reload-into-ims but got same case , youtube still cant 
be cached. any clue ?


This was already discussed in another thread with the subject "refresh 
patterns". In the full URL of a YouTube video there are a few 
parameters that are never the same, so basically every time you request 
the same file its URL is different from the previous one, and it 
will never get a hit unless someone writes a workaround/patch to 
rewrite the URLs while storing and looking up YouTube videos.


Manoj
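
For what it's worth, later Squid releases (2.7) added a store-URL rewriting hook
aimed at exactly this kind of workaround. A minimal sketch; the helper path is just
a placeholder, and the helper itself would read each request URL on stdin and print
a normalised key under which the object is stored and looked up:

# Squid 2.7 or later only, not the 2.6 releases discussed here
acl store_rewrite_list dstdomain .youtube.com .googlevideo.com
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /etc/squid/storeurl_youtube.pl
storeurl_rewrite_children 5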




Re: [squid-users] squid cant cache flv (youtube)

2007-08-27 Thread Kris
If you know how and have got it working on your system, would you mind 
sharing your conf with us?


Adrian Chadd wrote:

On Mon, Aug 27, 2007, Manoj_Rajkarnikar wrote:

  
Yes, it does appear so. FYI it's not the entire YouTube content that's 
uncacheable, only the flash videos and some image files. You can get hits 
on the rest of the thumbnail images and content though.



And I'm telling you, it's possible to partially cache YouTube video content
without modifying Squid.




Adrian



  




Re: [squid-users] squid cant cache flv (youtube)

2007-08-26 Thread Kris
I tried adding reload-into-ims but got the same result; YouTube still can't be 
cached. Any clue?


Kris wrote:

do you mind if you post your conf here ? or send me private , very thx b4

Adrian Chadd wrote:

On Sat, Aug 25, 2007, Kris wrote:
 

so i must put reload-into-ims on refresh pattern for flv/swf ?



You can try that.

All I know is that I'm able to cache Youtube right now. ;0




adrian



  








Re: [squid-users] squid cant cache flv (youtube)

2007-08-26 Thread Kris

Tried it but no luck; still can't cache YouTube T_T

zulkarnain wrote:

--- Kris [EMAIL PROTECTED] wrote:
  

Tried to put reload-into-ims but got same case ,
youtube still cant be 
cached. any clue ?





Kris,

Change reload-into-ims to ignore-no-cache.

Regards,
Zul




   



  




Re: [squid-users] squid cant cache flv (youtube)

2007-08-25 Thread Kris

Do you mind posting your conf here, or sending it to me privately? Thanks very much in advance.

Adrian Chadd wrote:

On Sat, Aug 25, 2007, Kris wrote:
  

so i must put reload-into-ims on refresh pattern for flv/swf ?



You can try that.

All I know is that I'm able to cache Youtube right now. ;0




adrian



  




[squid-users] squid cant cache flv (youtube)

2007-08-24 Thread Kris

I planned to cache YouTube FLV files using Squid.

my squid :

[EMAIL PROTECTED]:~# squid -v
Squid Cache: Version 2.6.STABLE13

my conf was like this :

acl video_cache dstdomain .youtube.com .google.com .llnwd.net 
.dailymotion.com .friendster.com .googlevideo.com

acl video_regex urlpath_regex get_video
cache allow video_cache
cache allow video_regex

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

cache_mem 1024 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 50 MB
minimum_object_size 1 KB
maximum_object_size_in_memory 1 MB

I tried to access one of the FLV videos on YouTube but I never got a TCP_HIT; 
it always appears as TCP_MISS (even after refreshing the page by pressing F5).


access.log was like this :

[24/Aug/2007:18:47:45 +0700] GET http://74.125.15.24/get_video? 
HTTP/1.1 200 1176075 TCP_MISS:DIRECT


Then I refreshed the page by pressing F5:

[24/Aug/2007:18:48:15 +0700] GET http://74.125.15.24/get_video? 
HTTP/1.1 200 1176075 TCP_MISS:DIRECT


any clue ?


Re: [squid-users] squid cant cache flv (youtube)

2007-08-24 Thread Kris

So your proxy didn't cache the FLV files on YouTube either?

Adrian Chadd wrote:

On Fri, Aug 24, 2007, Kris wrote:
  

I planned to cache youtube flv using squid



Yup. At the very least:

acl youtube dstdomain .youtube.com
cache allow youtube

Then unfortunately some work needs to be done to Squid to allow
URLs to be made equivalent so all the various youtube video
servers are considered as the same host. This step will allow
google earth/maps to be cached effectively too.

So, who would like to sponsor some work to make caching Google
Maps/Earth and video sites like Youtube actually work?




Adrian



  




[squid-users] Sibling cache not working ?

2007-08-13 Thread Kris

Hi all,

I have 3 proxies set up as siblings of each other. I have a question: if I 
download a file through Proxy A and then download it again, I get full 
speed (cached), but if I then download it through Proxy B I get slow speed, 
as if it had never been cached. Is that normal? Why doesn't Proxy B use the 
cache on Proxy A?


This is my configuration:

Proxy 1:
cache_peer 10.1.1.2 sibling 3128 3110 no-digest
cache_peer 10.1.1.3 sibling 3128 3110 no-digest

Proxy 2:
cache_peer 10.1.1.1 sibling 3128 3110 no-digest
cache_peer 10.1.1.3 sibling 3128 3110 no-digest

Proxy 3:
cache_peer 10.1.1.1 sibling 3128 3110 no-digest
cache_peer 10.1.1.2 sibling 3128 3110 no-digest


any clue ?
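
One thing worth double-checking in a setup like this: sibling HITs depend on the
ICP exchange, so every box also needs its ICP port open and matching the port the
peers dial. A sketch, assuming 3110 is the intended ICP port on all three proxies:

icp_port 3110
acl siblings src 10.1.1.1 10.1.1.2 10.1.1.3
icp_access allow siblings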


[squid-users] Question bout Sibling Cache

2007-08-03 Thread Kris

Hi all,

I have 3 proxies set up as siblings of each other. I have a question: if I 
download a file through Proxy A and then download it again, I get full 
speed (cached), but if I then download it through Proxy B I get slow speed, 
as if it had never been cached. Is that normal? Why doesn't Proxy B use the 
cache on Proxy A?


This is my configuration:

Proxy 1:
cache_peer 10.1.1.2 sibling 3128 3110 no-digest
cache_peer 10.1.1.3 sibling 3128 3110 no-digest

Proxy 2:
cache_peer 10.1.1.1 sibling 3128 3110 no-digest
cache_peer 10.1.1.3 sibling 3128 3110 no-digest

Proxy 3:
cache_peer 10.1.1.1 sibling 3128 3110 no-digest
cache_peer 10.1.1.2 sibling 3128 3110 no-digest


any clue ?


[squid-users] Acl + Delay Pool based on Size

2007-07-30 Thread Kris

Dear,

Is it possible? Example: I want to set a delay pool to limit the bandwidth if 
someone downloads a file bigger than 20 MB, and if it is smaller than 20 MB there 
should be no delay pool. I know how to use delay pools, but I don't know how to 
set an ACL to key on the maximum size. Any clue?
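
As far as I know there is no ACL that keys on transfer size for this, but the
delay-pool bucket itself can approximate it: give each client a bucket of roughly
20 MB that refills slowly, so downloads run at full speed until about 20 MB have
gone through and are throttled after that. A sketch with made-up numbers, using a
class 2 pool so the limit is per client address:

delay_pools 1
delay_class 1 2
# aggregate unlimited; each client gets a 20 MB bucket refilled at 32 KB/s
delay_parameters 1 -1/-1 32000/20971520
acl lan src 10.0.0.0/8
delay_access 1 allow lan
delay_access 1 deny all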


[squid-users] Acl + Delay Pool based on download size

2007-07-30 Thread Kris

Dear,

Is it possible? Example: I want to set a delay pool to limit the bandwidth if 
someone downloads a file bigger than 20 MB, and if it is smaller than 20 MB there 
should be no delay pool. I know how to use delay pools, but I don't know how to 
set an ACL to key on the maximum size. Any clue?


[squid-users] Some clients not able to access through WCCP

2005-10-10 Thread Kris Amy
Hi All,

I've stumbled on a strange problem and am not sure where to start looking.

I've implemented and got our squid boxes working with WCCP to our cisco
router. But some clients can't browse the web anymore. I don't see them
hitting the access.log. But when I turn off WCCP they can browse fine.

This seems to be mainly with our dial-up users (it's fine for the ADSL
users). It's around 20 users so far (out of ~900).

It's running on FreeBSD 5.4

Any ideas on where to start looking would be appreciated.

Cheers,
KA



RE: [squid-users] Some clients not able to access through WCCP

2005-10-10 Thread Kris Amy
Ahh that's what I thought but wasn't sure.

What is the easiest way to do that?

Cheers,
KA

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 11 October 2005 10:19 AM
To: Kris Amy
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Some clients not able to access through WCCP


On Mon, 10 Oct 2005, Kris Amy wrote:

 I've implemented and got our squid boxes working with WCCP to our cisco
 router. But some clients can't browse the web anymore. I don't see them
 hitting the access.log. But when I turn off WCCP they can browse fine.

MTU issues perhaps?

WCCP has a lot of trouble with PMTU discovery in the proxy-client 
direction.

Try disabling PMTU discovery on the proxy for traffic towards the clients 
(or perhaps even globally for all connections.. PMTU discovery generally 
hurts web traffic more than what is gained)

Regards
Henrik


RE: [squid-users] Some clients not able to access through WCCP

2005-10-10 Thread Kris Amy
I should probably clarify.

Should I do it on the FBSD box or do it on the cisco?

Cheers,
KA

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 11 October 2005 10:19 AM
To: Kris Amy
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Some clients not able to access through WCCP


On Mon, 10 Oct 2005, Kris Amy wrote:

 I've implemented and got our squid boxes working with WCCP to our cisco
 router. But some clients can't browse the web anymore. I don't see them
 hitting the access.log. But when I turn off WCCP they can browse fine.

MTU issues perhaps?

WCCP have a lot of trouble with PMTU discovery in the proxy-client 
direction.

Try disabling PMTU discovery on the proxy for traffic towards the clients 
(or perhaps even globally for all connections.. PMTU discovery generally 
hurts web traffic more than what is gained)

Regards
Henrik


RE: [squid-users] Some clients not able to access through WCCP

2005-10-10 Thread Kris Amy
Well I just tried this, but it didn't solve the problem.

I also noticed that everything works fine when mppp is turned on for
single connections, but when it is turned off it doesn't.

I also found I can get to some sites (hp.com) but not to others
(google.com).

Cheers,
KA

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 11 October 2005 10:52 AM
To: Kris Amy
Cc: Henrik Nordstrom; squid-users@squid-cache.org
Subject: RE: [squid-users] Some clients not able to access through WCCP




On Tue, 11 Oct 2005, Kris Amy wrote:

 I should probably clarrify.

 Should I do it on the FBSD box or do it on the cisco?

The FBSD box.

Regards
Henrik
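
On FreeBSD that is normally a sysctl toggle; a sketch, assuming the stock
net.inet.tcp.path_mtu_discovery knob is present on 5.4:

# turn off TCP path MTU discovery now
sysctl net.inet.tcp.path_mtu_discovery=0
# and keep it off across reboots
echo 'net.inet.tcp.path_mtu_discovery=0' >> /etc/sysctl.conf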



[squid-users] async-io

2003-03-19 Thread kris

What number of N_THREADS should I set in --enable-async-io[=N_THREADS]?

I have two 30GB scsi disks and about 160 req/sec


kris
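
A sketch of the combination being discussed: build with the aufs store module, leave
the thread count at its automatic default, and give each physical disk its own
cache_dir (paths and sizes below are placeholders):

# ./configure --enable-storeio=aufs,ufs --enable-async-io
cache_dir aufs /cache1 25000 16 256
cache_dir aufs /cache2 25000 16 256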





Re: [squid-users] async-io

2003-03-19 Thread kris

 You should not set it. The automatic default is good.

Do you remember my questions about the very high load average?

I tried a few things, including switching to aufs, and... my load went from
4 to 10; when I changed aufs back to ufs the load decreased to 2.

I thought the high load with aufs was because of wrong default
parameters, but now I don't know what to think. :(

Can someone help me?

kris





Re: [squid-users] Heavy load

2003-02-21 Thread kris
Some information.

Number of HTTP requests received:   648417
Number of ICP messages received:33035
Number of ICP messages sent:35967
Number of queued ICP replies:   0
Request failure ratio:   0.00%
Average HTTP requests per minute since start:   10199.8
Average ICP messages per minute since start:1085.4
Select loop called: 468043 times, 8.149 ms avg



UP Time:3814.281 seconds
CPU Time:   3196.525 seconds
CPU Usage:  83.80%
CPU Usage, 5 minute avg:90.22%
CPU Usage, 60 minute avg:   85.58%
Maximum Resident Size: 0 KB
Page faults with physical i/o: 674

Total space in arena:  495744 KB
Ordinary blocks:   493958 KB  32833 blks
Small blocks:   0 KB  0 blks
Holding blocks: 54484 KB 20 blks
Free Small blocks:  0 KB
Free Ordinary blocks:1786 KB
Total in use:  548442 KB 100%
Total free:  1786 KB 0%
Total size:550228 KB



kris
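
Figures like these come from the cache manager interface; a sketch of pulling them,
where the port is only an assumption:

squidclient -p 3128 mgr:info | egrep 'CPU|HTTP requests'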





Re: [squid-users] Heavy load

2003-02-21 Thread kris

 What kind of raid are you using? (raid level)


No RAID level, only the controller; each disk is used separately.


 On how many physical disks?

3: one system disk and two that are cache disks.


 And when did your cache become filled?  There is a significant
 difference in performance between an empty cache being filled and a
 cache which is full and being recycled..

Just after start, after the validation procedure, and all the time.


kris





RE: [squid-users] Heavy load

2003-02-20 Thread kris
 I have a squid machine based on 1 Xeon 2.8GHz and SCSI discs, i have
 about 260 req/s and very high load average (about 4 to 5)
 My squid is Version 2.5.STABLE1-20030114 running on RH8.0 SCSI U160
 Xeon 2.8GHZ
 Is that load normal ? I think that it's to high
 What's the problem ?

 I guess too less RAM and too much cache_dir.
 I guess squid is swapping.


The server has 2 GB of RAM and the squid process uses up to 900 MB of it;
swap is not used.

Now I have squid-2.5.STABLE1-20030114, and I think the load appeared
when I upgraded the software from ...I think squid-2.5.STABLE1-20021015 or
earlier (I don't remember). I can try to downgrade and check, but before I do
I thought I would ask ... :)


kris