Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Amos Jeffries

On 22/03/2013 4:39 p.m., Alex Rousskov wrote:

On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:


Thank you for your advice. If I want large files to be cached when they are
first seen by a worker, should my config be changed so that the first worker
to see a large file caches it, leaving the rest to the remaining rock-store
workers?

Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached.



I don't want cached content to be duplicated
among AUFS cache_dirs, and I want to take advantage of the rock store,
which can be shared between workers in an SMP deployment.

The above is not yet possible using official code. Your options include:

1. Do not cache large files.

2. Cache large files in isolated per-worker ufs-based cache_dirs,
one ufs-based cache_dir per worker,
suffering from false misses and duplicates.
I believe somebody reported success with this approach. YMMV.

3. Cache large files in SMP-aware rock cache_dirs,
using unofficial experimental Large Rock branch
that does not limit the size of cached objects to 32KB:
http://wiki.squid-cache.org/Features/LargeRockStore



4. Set up the SMP equivalent of a CARP peering hierarchy, with the 
frontend workers using shared rock caches and the backends using UFS. 
This minimizes cache duplication, but with the current SMP code it requires 
disabling loop detection (probably not a good thing) and some advanced 
configuration trickery.
If you want to actually go down that path let me know and I'll put the 
details together.


Amos


Re: [squid-users] Problem with Squid Helper and NTLM authentication

2013-03-22 Thread Amos Jeffries

On 22/03/2013 3:27 p.m., Carlos Daniel Perez wrote:

Hi Everyone,

I am now trying to configure NTLM, but I am having some trouble... apparently
the helper-protocol-squid-2.5-
ntlmssp option doesn't work...
I joined my Squid server to the AD domain; I can even run
/usr/bin/ntlm_auth --username --domain --password and the result is OK, but
the Squid helper says BH NTLMSSP query invalid...

Commands like wbinfo -u, wbinfo -g and wbinfo -a work fine, so I
think my problem is specifically with the helper... The time
is synchronized with my AD through ntpdate.

Any idea what could be happening?


Squid version?

Amos


Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Sokvantha YOUK
Dear Amos,

I would love to go down the path of trying the SMP equivalent of CARP
peering. Please guide me.

---
Regards,
Vantha

On Fri, Mar 22, 2013 at 1:13 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 22/03/2013 4:39 p.m., Alex Rousskov wrote:

 On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:

 Thank you for your advice. If I want large files to be cached when they are
 first seen by a worker, should my config be changed so that the first worker
 to see a large file caches it, leaving the rest to the remaining rock-store
 workers?

 Your OS assigns workers to incoming connections. Squid does not control
 that assignment. For the purposes of designing your storage, you may
 assume that the next request goes to a random worker. Thus, each of your
 workers must cache large files for files to be reliably cached.


 I don't want cached content to be duplicated
 among AUFS cache_dirs, and I want to take advantage of the rock store,
 which can be shared between workers in an SMP deployment.

 The above is not yet possible using official code. Your options include:

 1. Do not cache large files.

 2. Cache large files in isolated per-worker ufs-based cache_dirs,
 one ufs-based cache_dir per worker,
 suffering from false misses and duplicates.
 I believe somebody reported success with this approach. YMMV.

 3. Cache large files in SMP-aware rock cache_dirs,
 using unofficial experimental Large Rock branch
 that does not limit the size of cached objects to 32KB:
 http://wiki.squid-cache.org/Features/LargeRockStore


 4. Set up the SMP equivalent of a CARP peering hierarchy, with the frontend
 workers using shared rock caches and the backends using UFS. This minimizes
 cache duplication, but with the current SMP code it requires disabling loop
 detection (probably not a good thing) and some advanced configuration
 trickery.
 If you want to actually go down that path let me know and I'll put the
 details together.

 Amos



-- 

Regards,
Vantha


Re: [squid-users] squid/SMP

2013-03-22 Thread Adam W. Dace
Thanks, I was curious.

On Thu, Mar 21, 2013 at 5:39 PM, Eugene M. Zheganin e...@norma.perm.ru wrote:
 Hi.


 On 21.03.2013 17:01, Adam W. Dace wrote:

 I had this exact problem on a different platform, Mac OS X.

 You probably want to use sysctl to increase the OS-default limits on
 Unix Domain Sockets.
 They're mentioned at the bottom of the squid Wiki page here:
 http://wiki.squid-cache.org/Features/SmpScale

 Please mail the list if you don't mind once you try that, I then ran
 into a different problem but most likely FreeBSD isn't affected.


 Thanks a lot, this helped. It seems to be working after that; at least I have
 had no complaints yet.

 Eugene.



-- 

Adam W. Dace colonelforbi...@gmail.com

Phone: (815) 355-5848
Instant Messenger: AIM  Yahoo! IM - colonelforbin74 | ICQ - #39374451
Microsoft Messenger - colonelforbi...@live.com

Google Profile: http://www.google.com/profiles/ColonelForbin74


[squid-users] acl time stop at specified hour

2013-03-22 Thread Orlando Camarillo
Hi brothers.

I am running Squid HTTP Proxy 3.0 on Debian and everything is working
fine, except that I am seeing weird behavior with the time ACLs: every
single day they stop working at 17:23.
This is my ACL for denied sites: acl dstRestricted url_regex
/etc/squid3/dstRestricted,

and I apply the ACL like this: http_access allow lan !dstRestricted

and here is my acl time:

acl t1 time MTWHF 07:00-21:00
acl t2 time MTWHF 14:00-15:30
acl t3 MTWHF 21:00-24:00
acl t4 time MTWHF 00:00-07:00

and the issue happens between 17:23 and 21:00: the blocked
sites become unblocked and users can access them. I have tried a
lot of combinations of the t1, t2, t3 and t4 ACLs, but nothing works.

I wonder if it is possible to get some help from you guys.
I'd appreciate it.
Thanks in advance.
Jah bless.


[squid-users] Re: Maximum disk cache size per worker

2013-03-22 Thread babajaga
Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached. 

But I think such a config SHOULD avoid duplication:

if ${process_number}=1
cache_dir aufs /cache4/squid/${process_number} 17 32 256 min-size=31001 max-size=20
cache_dir aufs /cache5/squid/${process_number} 17 32 256 min-size=21 max-size=40
cache_dir aufs /cache6/squid/${process_number} 17 32 256 min-size=41 max-size=80
cache_dir aufs /cache7/squid/${process_number} 17 32 256 min-size=80
endif

Am I wrong ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Maximum-disk-cache-size-per-worker-tp4659105p4659144.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Why is this un-cacheable?

2013-03-22 Thread csn233
URL: 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/en_US/AdbeRdr950_en_US.exe

It shows a MISS, regardless of how I tweak the refresh_pattern,
including the adding of all the override* and ignore* options:

Last-Modified: Wed, 04 Jan 2012 07:08:53 GMT
...
X-Cache: MISS from ...
X-Cache-Lookup: MISS from ...


What have I missed, so to speak?
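For illustration, a tweak of the kind described above might look like the
following (the values and option set are arbitrary examples, not the poster's
actual configuration):

```
# Illustrative only -- an aggressive refresh_pattern for .exe downloads:
refresh_pattern -i \.exe$ 10080 90% 43200 override-expire override-lastmod ignore-reload ignore-private
```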


Re: [squid-users] Re: Maximum disk cache size per worker

2013-03-22 Thread Sokvantha Youk

On 3/22/13 2:43 PM, babajaga wrote:

Your OS assigns workers to incoming connections. Squid does not control

that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached. 

But I think such a config SHOULD avoid duplication:

if ${process_number}=1
cache_dir aufs /cache4/squid/${process_number} 17 32 256 min-size=31001 max-size=20
cache_dir aufs /cache5/squid/${process_number} 17 32 256 min-size=21 max-size=40
cache_dir aufs /cache6/squid/${process_number} 17 32 256 min-size=41 max-size=80
cache_dir aufs /cache7/squid/${process_number} 17 32 256 min-size=80
endif

Am I wrong ?

I don't know how to find out whether there is duplicated cached content. 
How can I check for duplication?
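One rough, untested way to look for duplicates across per-worker UFS dirs is
to compare swap-out records in store.log. The sketch below uses inline sample
data; in a real run the input would be /var/log/squid/store.log* (field
positions can vary between Squid versions, so treat this as an assumption):

```shell
# Hypothetical sketch: list URLs swapped out into more than one cache_dir.
cat <<'EOF' > /tmp/store.sample
1363930000.123 SWAPOUT 00 0000A1 hashA 200 1363930000 -1 -1 x-y 12345 54321 GET http://example.com/big.iso
1363930001.456 SWAPOUT 01 0000B2 hashB 200 1363930001 -1 -1 x-y 12345 54321 GET http://example.com/big.iso
1363930002.789 SWAPOUT 00 0000C3 hashC 200 1363930002 -1 -1 x-y 123 321 GET http://example.com/small.gif
EOF
# Unique (dir, URL) pairs, then URLs that appear under more than one dir:
awk '$2 == "SWAPOUT" {print $3, $NF}' /tmp/store.sample | sort -u |
  awk '{print $2}' | sort | uniq -d
# prints: http://example.com/big.iso
```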






Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Sokvantha Youk

On 3/22/2013 1:13 PM, Amos Jeffries wrote:

On 22/03/2013 4:39 p.m., Alex Rousskov wrote:

On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:


Thank you for your advice. If I want large files to be cached when they are
first seen by a worker, should my config be changed so that the first worker
to see a large file caches it, leaving the rest to the remaining rock-store
workers?

Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached.



I don't want cached content to be duplicated
among AUFS cache_dirs, and I want to take advantage of the rock store,
which can be shared between workers in an SMP deployment.

The above is not yet possible using official code. Your options include:

1. Do not cache large files.

2. Cache large files in isolated per-worker ufs-based cache_dirs,
one ufs-based cache_dir per worker,
suffering from false misses and duplicates.
I believe somebody reported success with this approach. YMMV.

3. Cache large files in SMP-aware rock cache_dirs,
using unofficial experimental Large Rock branch
that does not limit the size of cached objects to 32KB:
http://wiki.squid-cache.org/Features/LargeRockStore



4. Set up the SMP equivalent of a CARP peering hierarchy, with the 
frontend workers using shared rock caches and the backends using UFS. 
This minimizes cache duplication, but with the current SMP code it requires 
disabling loop detection (probably not a good thing) and some advanced 
configuration trickery.
If you want to actually go down that path let me know and I'll put the 
details together.


Amos

Dear Amos,

May you show me how to achieve at #4?

---
Regards,
Vantha


Re: [squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-22 Thread Amos Jeffries

On 22/03/2013 11:18 a.m., Leonardo Rodrigues wrote:


The basic authentication type will always prompt for 
username/password; there's nothing wrong with it and no way to avoid 
or 'fix' it, as nothing is wrong at all




Not true. There is no more or less reason for the Basic auth scheme to cause 
a popup than any other. If the browser is able to find credentials that 
will work against the proxy, it can send them without a popup asking for 
others. This is true for *all* authentication types. How the browser 
gets credentials is entirely outside the scope of Squid's interaction. 
A user popup is one potential source of credentials amongst many.



if your users are authenticated in your domain and you want Squid to 
'automagically' use those credentials for web surfing, then you'll 
have to change your authentication type to ntlm, digest or negotiate.


I have LOTS of Squid boxes authenticating against AD using the ntlm 
authentication type. It's a lot more complicated to configure than the 
basic type but, once configured, it works just fine.


On the other hand, NTLM was officially deprecated more than 10 years ago 
and has been officially removed from the last several generations of MS products. 
Carlos, if you don't already know and use NTLM, try to go straight to 
Kerberos with the Negotiate auth scheme.
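A minimal Negotiate/Kerberos setup might look roughly like the sketch below.
Everything here is an assumption for illustration: the helper is called
squid_kerb_auth on Squid 3.1 and negotiate_kerberos_auth on later releases,
install paths vary by distro, and the service principal is an example value:

```
# Hedged sketch of Negotiate/Kerberos proxy auth (untested; names are examples):
auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20
auth_param negotiate keep_alive on

acl authed proxy_auth REQUIRED
http_access allow authed
```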




Em 21/03/13 18:45, Carlos Daniel Perez escreveu:

Hi,

I have a Squid server configured to make queries against an Active Directory
server through squid_ldap_group. The query is OK and authenticated 
users can surf the web. But my users need to enter their username and 
password when opening a browser.

[ ... ]
My squid_ldap_auth line is: auth_param basic program
/usr/lib/squid3/squid_ldap_auth -R -d -b dc=enterprise,dc=com -D
cn=support,cn=Users,dc=enterprise,dc=com -w 12345 -f sAMAccountName=%s
-h
192.168.2.1




What traffic is going through? I think that helper does not strip the 
Windows realm off the username if the browser is sending the NTLM 
credentials across the Basic scheme.


What version of Squid are you using? (It looks old if it still contains a 
binary named squid_ldap_auth.) Some of the 3.x releases don't support NTLM 
credentials well.


What browser is the problem showing up with? Browsers other than IE have 
a hard time locating the Windows login credentials to use for SSO.


Amos


Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Amos Jeffries

On 22/03/2013 7:21 p.m., Sokvantha YOUK wrote:

Dear Amos,

I would love to go down the path of trying the SMP equivalent of CARP
peering. Please guide me.


The design is laid out here with config files for the pre-SMP versions 
of Squid:

 http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

The SMP worker version of that is only slightly different. Only one squid 
is started, using a single main squid.conf containing a series 
of if-conditions assigning each worker to a frontend or backend 
configuration file, like so:


squid.conf:
  workers 3
  if ${process_number} = 1
  include /etc/squid/backend.conf
  endif
  if ${process_number} = 2
  include /etc/squid/backend.conf
  endif
  if ${process_number} = 3
  include /etc/squid/frontend.conf
  endif


The backends can share one config file by using ${process_number} in all 
the places that must be unique per worker (hostname, cache_dir path, last 
digit of the port number, etc.).


The frontend must reference the backends without using ${process_number}. 
It can also have a rock cache_dir to serve small objects quickly, 
although YMMV on this.


You can expand this out with multiple frontends if you like, or with more 
than two backends.
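As a concrete illustration of the two files (an untested sketch; the
hostnames, ports, paths and sizes below are all assumptions, not Amos's
actual details):

```
# backend.conf -- shared by workers 1 and 2; ${process_number} makes
# the per-worker values unique.
visible_hostname backend${process_number}.example.local
http_port 127.0.0.1:400${process_number}
cache_dir aufs /var/cache/squid/aufs${process_number} 100000 16 256 min-size=32768

# frontend.conf -- worker 3; lists each backend explicitly, without the macro.
visible_hostname frontend.example.local
http_port 3128
cache_dir rock /var/cache/squid/rock 8192 max-size=31000
cache_peer 127.0.0.1 parent 4001 0 carp name=backend1 no-query no-digest
cache_peer 127.0.0.1 parent 4002 0 carp name=backend2 no-query no-digest
# Per the caveat above, forwarding-loop detection would also have to be
# relaxed (e.g. distinct visible_hostname values and/or 'via off').
```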



Amos



---
Regards,
Vantha

On Fri, Mar 22, 2013 at 1:13 PM, Amos Jeffries squ...@treenet.co.nz wrote:

On 22/03/2013 4:39 p.m., Alex Rousskov wrote:

On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:


Thank you for your advice. If I want large files to be cached when they are
first seen by a worker, should my config be changed so that the first worker
to see a large file caches it, leaving the rest to the remaining rock-store
workers?

Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached.



I don't want cached content to be duplicated
among AUFS cache_dirs, and I want to take advantage of the rock store,
which can be shared between workers in an SMP deployment.

The above is not yet possible using official code. Your options include:

1. Do not cache large files.

2. Cache large files in isolated per-worker ufs-based cache_dirs,
 one ufs-based cache_dir per worker,
 suffering from false misses and duplicates.
 I believe somebody reported success with this approach. YMMV.

3. Cache large files in SMP-aware rock cache_dirs,
 using unofficial experimental Large Rock branch
 that does not limit the size of cached objects to 32KB:
 http://wiki.squid-cache.org/Features/LargeRockStore


4. Set up the SMP equivalent of a CARP peering hierarchy, with the frontend
workers using shared rock caches and the backends using UFS. This minimizes
cache duplication, but with the current SMP code it requires disabling loop
detection (probably not a good thing) and some advanced configuration
trickery.
If you want to actually go down that path let me know and I'll put the
details together.

Amos







Fwd: [squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-22 Thread Carlos Daniel Perez
Squid version: 3.1.19
Web browsers: IE and Firefox





On 22/03/2013 11:18 a.m., Leonardo Rodrigues wrote:


 The basic authentication type will always prompt for username/password; 
 there's nothing wrong with it and no way to avoid or 'fix' it, as 
 nothing is wrong at all


Not true. There is no more or less reason for the Basic auth scheme to
cause a popup than any other. If the browser is able to find
credentials that will work against the proxy, it can send them without
a popup asking for others. This is true for *all* authentication
types. How the browser gets credentials is entirely outside the scope
of Squid's interaction. A user popup is one potential source of
credentials amongst many.



 if your users are authenticated in your domain and you want Squid to 
 'automagically' use those credentials for web surfing, then you'll have to 
 change your authentication type to ntlm, digest or negotiate.

 I have LOTS of Squid boxes authenticating against AD using the ntlm 
 authentication type. It's a lot more complicated to configure than the basic 
 type but, once configured, it works just fine.


On the other hand, NTLM was officially deprecated more than 10 years ago
and has been officially removed from the last several generations of MS
products. Carlos, if you don't already know and use NTLM, try to go
straight to Kerberos with the Negotiate auth scheme.



 Em 21/03/13 18:45, Carlos Daniel Perez escreveu:

 Hi,

 I have a Squid server configured to make queries against an Active Directory
 server through squid_ldap_group. The query is OK and authenticated users
 can surf the web. But my users need to enter their username and password
 when opening a browser.

 [ ... ]
 My squid_ldap_auth line is: auth_param basic program
 /usr/lib/squid3/squid_ldap_auth -R -d -b dc=enterprise,dc=com -D
 cn=support,cn=Users,dc=enterprise,dc=com -w 12345 -f sAMAccountName=%s
 -h
 192.168.2.1



What traffic is going through? I think that helper does not strip the
Windows realm off the username if the browser is sending the NTLM
credentials across the Basic scheme.

What version of Squid are you using? (It looks old if it still contains a
binary named squid_ldap_auth.) Some of the 3.x releases don't support NTLM
credentials well.

What browser is the problem showing up with? Browsers other than IE
have a hard time locating the Windows login credentials to use for SSO.

Amos


Re: [squid-users] rock squid -k reconfigure

2013-03-22 Thread Alexandre Chappaz
Hi,

Investigating this issue, it appears that the problem comes from
the disker ID in the SwapDir object.
I added these debug lines in SwapDir::active():

...
// we are inside a disker dedicated to this disk
debugs(3, 1, "SwapDir::active :: KidIdentifier = " << KidIdentifier <<
       " disker = " << disker << ".");
if (KidIdentifier == disker)
    return true;


it appears that the disker is wrong after running a squid -k
reconfigure, and hence the active() function returns false.


with a fresh start :

2013/03/22 11:30:29 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:29 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:29 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:29 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:30 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:30 kid3| SwapDir::active :: KidIdentifier = 3disker =  2.
2013/03/22 11:30:30 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.
2013/03/22 11:30:30 kid2| SwapDir::active :: KidIdentifier = 2disker =  2.

with

[root@tv alex]# ps aux|grep squid

root 20320  0.0  0.0 1182724 2096 ?Ss   11:30   0:00 squid
-f /etc/squid/squid.conf
proxy20322  0.1  0.2 1183196 11220 ?   S11:30   0:00
(squid-coord-3) -f /etc/squid/squid.conf
proxy20323  0.0  0.2 1184180 11640 ?   S11:30   0:00
(squid-disk-2) -f /etc/squid/squid.conf
proxy20324  0.0  0.2 1188276 11484 ?   S11:30   0:00
(squid-1) -f /etc/squid/squid.conf




after issuing the squid -k reconfigure, the output of the ps is
identical, but the diskerID is set to 3.

2013/03/22 11:31:46 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:46 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:46 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:47 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:47 kid3| SwapDir::active :: KidIdentifier = 3disker =  3.
2013/03/22 11:31:47 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.
2013/03/22 11:31:47 kid2| SwapDir::active :: KidIdentifier = 2disker =  3.


Trying to trace the problem further, the disker ID seems to be set
in the constructor of SwapDir:

SwapDir::SwapDir(char const *aType): theType(aType),
max_size(0), min_objsize(0), max_objsize (-1),
path(NULL), index(-1), disker(-1),
repl(NULL), removals(0), scanned(0),
cleanLog(NULL)
{
fs.blksize = 1024;
}



Any hints on where the right call is for creation (or re-creation)
of this object after a reconfigure?


Thanks
Alex

2013/3/18 Alex Rousskov rouss...@measurement-factory.com:
 On 03/18/2013 09:18 AM, Alexandre Chappaz wrote:
 Hi,

 I am using squid 3.2.8-20130304-r11795 with SMP and a rock dir configured.
 After a fresh start, cachemanager:storedir reports :

 by kid5 {
 Store Directory Statistics:
 Store Entries  : 52
 Maximum Swap Size  : 8388608 KB
 Current Store Swap Size: 28176.00 KB
 Current Capacity   : 0.34% used, 99.66% free

 Store Directory #0 (rock): /var/cache/squid/mem/
 FS Block Size 1024 Bytes

 Maximum Size: 8388608 KB
 Current Size: 28176.00 KB 0.34%
 Maximum entries:262143
 Current entries:   880 0.34%
 Pending operations: 1 out of 0
 Flags:
 } by kid5



 for the rock cache_dir.


 After a squid -k reconfigure, without any change in the squid.conf,
 the cachemanager is reporting this :

 by kid5 {
 Store Directory Statistics:
 Store Entries  : 52
 Maximum Swap Size  : 0 KB
 Current Store Swap Size: 0.00 KB
 Current Capacity   : 0.00% used, 0.00% free

 } by kid5



 Is this only a problem with the reporting? Is the rock cachedir still
 in use after the reconfigure / is there a way to check if it is still
 in use?


 Please see Bug 3774. It may be related to your problem.

http://bugs.squid-cache.org/show_bug.cgi?id=3774

 Alex.



[squid-users] Fwd: Hye I have some problems with my cache

2013-03-22 Thread Olivier Calzi
Hi Everyone,

Thanks for reading me.

I'm working at a firm and we have a situation with Squid on one of
our squid servers.
They have the same squid.conf except for the initialisation of SquidGuard.

The problem is:
- My cache keeps growing and growing, and soon my drive will be full, and
we don't understand why.

Our squid is:

Package: squid
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 1876
Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
Architecture: i386
Version: 2.7.STABLE7-1ubuntu12.6

My squid.conf:


http_port 192.168.1.244:3128

# Added for the accounting software, which does not accept HTTP 417
# responses to HTTP/1.1 requests even though it should
ignore_expect_100 on

hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

cache_dir ufs /var/cache/squid 1024 16 256

hosts_file /etc/hosts




refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320


acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
# IPs authorised to download
acl localhost src 127.0.0.1/255.255.255.255

#acl adm_downloader src 192.168.2.104
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 8080 1 # https, snews + Peexter https
acl SSL_ports port 873 # rsync
acl SSL_ports port 9433 # S@My
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT



http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl lan src 192.168.1.0/24

http_access allow lan

http_access deny all

http_reply_access allow all

icp_access allow all

coredump_dir /var/spool/squid
#redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf


Thanks again for reading me.
Olivier Calzi


Re: [squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-22 Thread Delton
You can see an example of authentication using Kerberos here 
http://www.howtoforge.com/debian-squeeze-squid-kerberos-ldap-authentication-active-directory-integration-and-cyfin-reporter

or here http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

Em 21/03/2013 19:18, Leonardo Rodrigues escreveu:


basic authentication type will always prompt for 
username/password, there's nothing wrong with it and no way to avoid 
it nor 'fix' it as there's nothing wrong at all


if your users are authenticated in your domain and you want squid 
do 'automagically' use those credentials for web surfing, then you'll 
have to change your authentication type to ntlm or digest or negotiate.


i have LOTS of squid boxes authenticanting on ADs using ntlm 
authentication type. It's a lot more complicated to configure than 
basic type but, once configured, it works just fine and simply.



Em 21/03/13 18:45, Carlos Daniel Perez escreveu:

Hi,

I have a Squid server configured to make querys in one ActiveDirectory
server trough squid_ldap_group. The query it's OK and authenticated 
users
can surf the web. But, my users need to put their users and password 
when

open a browser.

[ ... ]
My squid_ldap_auth line is: auth_param basic program
/usr/lib/squid3/squid_ldap_auth -R -d -b dc=enterprise,dc=com -D
cn=support,cn=Users,dc=enterprise,dc=com -w 12345 -f sAMAccountName=%s
-h
192.168.2.1








Re: [squid-users] Why is this un-cacheable?

2013-03-22 Thread Eliezer Croitoru




 Original Message 
Subject:Re: [squid-users] Why is this un-cacheable?
Date:   Fri, 22 Mar 2013 11:09:52 +0200
From:   Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@squid-cache.org



On 03/22/2013 10:04 AM, csn233 wrote:

URL:http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/en_US/AdbeRdr950_en_US.exe

It shows a MISS, regardless of how I tweak the refresh_pattern,
including the adding of all the override* and ignore* options:

Last-Modified: Wed, 04 Jan 2012 07:08:53 GMT
...
X-Cache: MISS from ...
X-Cache-Lookup: MISS from ...


What have I missed, so to speak?

http://redbot.org/

will help you.

Regards,
Eliezer




[squid-users] delay_pools 5 with AD groups ( external acl )

2013-03-22 Thread Rupesh

Dear All,

I just have small query.

I have an AD server with groups created there. Now the requirement is to limit 
bandwidth based on groups.


external_acl_type wbinfo_group_helper_2 ttl=0 concurrency=5 %LOGIN 
/usr/lib64/squid/wbinfo_group.pl -d
acl test external wbinfo_group_helper_2 medical

Current delay pool setting is:-

delay_pools 1
delay_class 1 4
delay_parameters 1 32000/32000 8000/8000 600/64000 16000/16000
delay_access 1 allow test
delay_access 1 deny all

I initially tried the above config but it didn't work. After doing some 
more searching, I learned that class 5 should be used to make it work with 
an external ACL, but I am still confused about creating the delay_pools.


Following is the details i found from

/usr/share/doc/squid-3.1.10/squid.conf.documented.

class 5  Requests are grouped according to their tag (see external_acl's tag= 
reply).
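For reference, a class-5 setup might look roughly like this untested sketch.
It assumes a hypothetical helper (group_tag_helper) that returns a tag=<group>
annotation in its reply; the stock wbinfo_group.pl only returns OK/ERR, so it
would need modifying:

```
# Untested sketch: delay class 5 gives each distinct tag its own bucket.
external_acl_type tagger ttl=60 %LOGIN /usr/local/bin/group_tag_helper
acl tagged external tagger
http_access allow tagged

delay_pools 1
delay_class 1 5
# one restore/max pair, applied per tag
delay_parameters 1 32000/32000
delay_access 1 allow tagged
delay_access 1 deny all
```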

Can anyone please help me to prepare delay pools for this.

Thanks,
Rupesh



Re: [squid-users] Why is this un-cacheable?

2013-03-22 Thread csn233
On Fri, Mar 22, 2013 at 8:36 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 http://redbot.org/

 will help you.

 Regards,
 Eliezer


OK, I thought this looked OK, i.e. cacheable?

Caching
The resource last changed 1 year 78 days ago.
This response allows all caches to store it.
This response allows a cache to assign its own freshness lifetime.


Re: [squid-users] Fwd: Hye I have some problems with my cache

2013-03-22 Thread Squidblacklist

You have your cache set to 1024 MB, which is very small; it is no
wonder it fills quickly. I would increase that.



On Fri, 22 Mar 2013 11:40:24 +0100
Olivier Calzi olivierca...@gmail.com wrote:

 Hi Everyone,

 Thanks for reading me.

 I'm working at a firm and we have a situation with Squid on one of
 our squid servers.
 They have the same squid.conf except for the initialisation of
 SquidGuard.

 The problem is:
 - My cache keeps growing and growing, and soon my drive will be full, and
 we don't understand why.
 
 Our squid is:
 
 Package: squid
 Status: install ok installed
 Priority: optional
 Section: web
 Installed-Size: 1876
 Maintainer: Ubuntu Developers ubuntu-devel-disc...@lists.ubuntu.com
 Architecture: i386
 Version: 2.7.STABLE7-1ubuntu12.6
 
 My squid.conf:
 
 
 http_port 192.168.1.244:3128
 
 # Added for the accounting software, which does not accept HTTP 417
 # responses to HTTP/1.1 requests even though it should
 ignore_expect_100 on
 
 hierarchy_stoplist cgi-bin ?
 
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY
 
 cache_dir ufs /var/cache/squid 1024 16 256
 
 hosts_file /etc/hosts
 
 
 
 
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern ^gopher:14400%  1440
 refresh_pattern .   0   20% 4320
 
 
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 # IPs authorised to download
 acl localhost src 127.0.0.1/255.255.255.255
 
 #acl adm_downloader src 192.168.2.104
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443 563 8080 1 # https, snews + Peexter https
 acl SSL_ports port 873 # rsync
 acl SSL_ports port 9433 # S@My
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 563 # https, snews
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl Safe_ports port 631 # cups
 acl Safe_ports port 873 # rsync
 acl Safe_ports port 901 # SWAT
 acl purge method PURGE
 acl CONNECT method CONNECT
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl Safe_ports port 631 # cups
 acl Safe_ports port 873 # rsync
 acl Safe_ports port 901 # SWAT
 acl purge method PURGE
 acl CONNECT method CONNECT
 
 
 
 http_access allow manager localhost
 http_access deny manager
 http_access allow purge localhost
 http_access deny purge
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 
 acl lan src 192.168.1.0/24
 
 http_access allow lan
 
 http_access deny all
 
 http_reply_access allow all
 
 icp_access allow all
 
 coredump_dir /var/spool/squid
 #redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
 
 
 Thanks again for reading me.
 Olivier Calzi
 



-
Signed,

Fix Nichols

http://www.squidblacklist.org


[squid-users] FW: Squid Stopping

2013-03-22 Thread Steven Morton
Hi

Can anyone help with a small problem? Since updating CentOS to version 6.4,
squid stops every now and then and I have to stop and restart it. I am using it
with squidGuard.

I have done a fresh install of CentOS and still have the same problem. Every
time it happens I get this error in the cache log:

CPU Usage: 52.078 seconds = 33.264 user + 18.814 sys
Maximum Resident Size: 562752 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   total space in arena:  135540 KB
   Ordinary blocks:   133506 KB    284 blks
   Small blocks:   0 KB  6 blks
   Holding blocks:  1612 KB  5 blks
   Free Small blocks:  0 KB
   Free Ordinary blocks:    2033 KB
   Total in use:  135118 KB 100%
   Total free:  2033 KB 2%
2013/03/22 13:21:05| Open FD UNSTARTED 8 DNS Socket IPv6
2013/03/22 13:21:05| Open FD READ/WRITE    9 DNS Socket IPv4
2013/03/22 13:21:05| Open FD READ/WRITE   10 squidGuard #1
2013/03/22 13:21:05| Open FD READ/WRITE   11
customer.fpsdistribution.co.uk:443
2013/03/22 13:21:05| Open FD READ/WRITE   12 Waiting for next request
2013/03/22 13:21:05| Open FD READ/WRITE   14
customer.fpsdistribution.co.uk:443
2013/03/22 13:21:05| Open FD READ/WRITE   15 Waiting for next request
2013/03/22 13:21:05| Open FD READ/WRITE   16 Waiting for next request
2013/03/22 13:21:05| Open FD READ/WRITE   17 squidGuard #2
2013/03/22 13:21:05| Open FD READ/WRITE   18 clients4.google.com:443
2013/03/22 13:21:05| Open FD READ/WRITE   19
customer.fpsdistribution.co.uk:443
2013/03/22 13:21:05| Open FD READ/WRITE   20
customer.fpsdistribution.co.uk:443
2013/03/22 13:21:05| Open FD READ/WRITE   21 squidGuard #3

Any help appreciated.

Thanks




[squid-users] squid 3.2 and error_map equivalent

2013-03-22 Thread Martin Sperl
Hi!

Is there an equivalent for the squid 2.X error_map functionality?

Depending on context we would need to provide different error messages and text 
customizations.

The error_map config would have been an ideal match for my requirements - a 
small Apache with mod_rewrite could easily be (ab)used for that (including a 
policy framework deciding which pages to deliver for which portion of the 
reverse-proxy URL and which virtual host gets accessed, ...)

Is there any different means to do that in an elegant manner that does not 
require icap or similar?

Thanks,
    Martin


This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp


Re: [squid-users] Fwd: Hye I have some problems with my cache

2013-03-22 Thread Squidblacklist
Also I use this directive in my conf to ensure squid will purge older
unused cache and not run out of disk space

cache_swap_low 90
cache_swap_high 95

Not sure if you need that, but it works for me.




On Fri, 22 Mar 2013 07:40:01 -0700
Squidblacklist webmas...@squidblacklist.org wrote:

 
 You have your cache set to 1024 MB in size, which is very small; it is
 no wonder it is filling quickly. I would increase that.
 
 
 
 On Fri, 22 Mar 2013 11:40:24 +0100
 Olivier Calzi olivierca...@gmail.com wrote:
 
  Hye Everyone,
  
  Thanks for reading me.
  
  I'm working at a firm and we have a situation with Squid on one of
  our Squid servers.
  They have the same squid.conf except for the initialisation of
  SquidGuard.
  
  The problem is:
  - My cache grows and grows, and soon my drive will be full,
  and we don't understand why.
  
  Our squid is:
  
  Package: squid
  Status: install ok installed
  Priority: optional
  Section: web
  Installed-Size: 1876
  Maintainer: Ubuntu Developers
  ubuntu-devel-disc...@lists.ubuntu.com Architecture: i386
  Version: 2.7.STABLE7-1ubuntu12.6
  
  My squid.conf:
  
  
  http_port 192.168.1.244:3128
  
   # Added for the accounting software that does not accept HTTP
   # 417 responses on HTTP/1.1 requests even though it should
  ignore_expect_100 on
  
  hierarchy_stoplist cgi-bin ?
  
  acl QUERY urlpath_regex cgi-bin \?
  no_cache deny QUERY
  
  cache_dir ufs /var/cache/squid 1024 16 256
  
  hosts_file /etc/hosts
  
  
  
  
  refresh_pattern ^ftp:   144020% 10080
  refresh_pattern ^gopher:14400%  1440
  refresh_pattern .   0   20% 4320
  
  
  acl all src 0.0.0.0/0.0.0.0
  acl manager proto cache_object
  #IP autorise a telecharger
  acl localhost src 127.0.0.1/255.255.255.255
  
  #acl adm_downloader src 192.168.2.104
  acl to_localhost dst 127.0.0.0/8
   acl SSL_ports port 443 563 8080 1 # https, snews + https for Peexter
   acl SSL_ports port 873 # rsync
  acl SSL_ports port 9433 # S@My
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 563 # https, snews
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl Safe_ports port 631 # cups
  acl Safe_ports port 873 # rsync
  acl Safe_ports port 901 # SWAT
  acl purge method PURGE
  acl CONNECT method CONNECT
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl Safe_ports port 631 # cups
  acl Safe_ports port 873 # rsync
  acl Safe_ports port 901 # SWAT
  acl purge method PURGE
  acl CONNECT method CONNECT
  
  
  
  http_access allow manager localhost
  http_access deny manager
  http_access allow purge localhost
  http_access deny purge
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  
  acl lan src 192.168.1.0/24
  
  http_access allow lan
  
  http_access deny all
  
  http_reply_access allow all
  
  icp_access allow all
  
  coredump_dir /var/spool/squid
  #redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
  
  
  Thanks again for reading me.
  Olivier Calzi
  
 
 
 
 -
 Signed,
 
 Fix Nichols
 
 http://www.squidblacklist.org



-
Signed,

Fix Nichols

http://www.squidblacklist.org


Re: [squid-users] Fwd: Hye I have some problems with my cache

2013-03-22 Thread Olivier Calzi
Thanks for the reply, I will test this ASAP.

Have a nice week-end


2013/3/22 Squidblacklist webmas...@squidblacklist.org:
 Also I use this directive in my conf to ensure squid will purge older
 unused cache and not run out of disk space

 cache_swap_low 90
 cache_swap_high 95

 Not sure if you need that, but it works for me.




 On Fri, 22 Mar 2013 07:40:01 -0700
 Squidblacklist webmas...@squidblacklist.org wrote:


  You have your cache set to 1024 MB in size, which is very small; it is
  no wonder it is filling quickly. I would increase that.



 On Fri, 22 Mar 2013 11:40:24 +0100
 Olivier Calzi olivierca...@gmail.com wrote:

  Hye Everyone,
 
  Thanks for reading me.
 
  I'm working in a firm and we have a situation with squid on one of
  ours squidserveur.
  There are the same squid.conf except for the initialisation of
  Squidguard.
 
  The problem is:
  - My cache grow up again and again and soon my drive will be full
  and we don't understand why.
 
  Our squid is:
 
  Package: squid
  Status: install ok installed
  Priority: optional
  Section: web
  Installed-Size: 1876
  Maintainer: Ubuntu Developers
  ubuntu-devel-disc...@lists.ubuntu.com Architecture: i386
  Version: 2.7.STABLE7-1ubuntu12.6
 
  My squid.conf:
 
 
  http_port 192.168.1.244:3128
 
   # Added for the accounting software that does not accept HTTP
   # 417 responses on HTTP/1.1 requests even though it should
  ignore_expect_100 on
 
  hierarchy_stoplist cgi-bin ?
 
  acl QUERY urlpath_regex cgi-bin \?
  no_cache deny QUERY
 
  cache_dir ufs /var/cache/squid 1024 16 256
 
  hosts_file /etc/hosts
 
 
 
 
  refresh_pattern ^ftp:   144020% 10080
  refresh_pattern ^gopher:14400%  1440
  refresh_pattern .   0   20% 4320
 
 
  acl all src 0.0.0.0/0.0.0.0
  acl manager proto cache_object
  #IP autorise a telecharger
  acl localhost src 127.0.0.1/255.255.255.255
 
  #acl adm_downloader src 192.168.2.104
  acl to_localhost dst 127.0.0.0/8
   acl SSL_ports port 443 563 8080 1 # https, snews + https for Peexter
   acl SSL_ports port 873 # rsync
  acl SSL_ports port 9433 # S@My
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 563 # https, snews
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535 # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl Safe_ports port 631 # cups
  acl Safe_ports port 873 # rsync
  acl Safe_ports port 901 # SWAT
  acl purge method PURGE
  acl CONNECT method CONNECT
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl Safe_ports port 631 # cups
  acl Safe_ports port 873 # rsync
  acl Safe_ports port 901 # SWAT
  acl purge method PURGE
  acl CONNECT method CONNECT
 
 
 
  http_access allow manager localhost
  http_access deny manager
  http_access allow purge localhost
  http_access deny purge
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
 
  acl lan src 192.168.1.0/24
 
  http_access allow lan
 
  http_access deny all
 
  http_reply_access allow all
 
  icp_access allow all
 
  coredump_dir /var/spool/squid
  #redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
 
 
  Thanks again for reading me.
  Olivier Calzi
 



 -
 Signed,

 Fix Nichols

 http://www.squidblacklist.org



 -
 Signed,

 Fix Nichols

 http://www.squidblacklist.org



-- 
Cordialement
Olivier Calzi


[squid-users] issues with acl time squid http proxy 3.0

2013-03-22 Thread Orlando Camarillo
Hi brothers.

I have Squid HTTP Proxy 3.0 running on Debian. Everything is working
fine, except I get weird behavior with the time ACLs: every single day
they stop working at 1723 hrs.
This is my ACL for denied sites: acl dstRestricted url_regex
/etc/squid3/dstRestricted

and I apply the ACL like this: http_access allow lan !dstRestricted

and here is my acl time:

acl t1 time MTWHF 07:00-21:00
acl t2 time MTWHF 14:00-15:30
acl t3 MTWHF 21:00-24:00
acl t4 time MTWHF 00:00-07:00

and the issue happens between 1723 hrs. and 2100 hrs: the blocked
sites become unblocked and the users can access them. I tried a
lot of combinations of the ACLs t1, t2, t3 and t4, but it never works.

I wonder if it is possible to get some help from you guys.
I'd appreciate it.
Thanks in advance.
Jah bless.

--
User Linux# 482385


Re: [squid-users] Re: Maximum disk cache size per worker

2013-03-22 Thread Alex Rousskov
On 03/22/2013 01:43 AM, babajaga wrote:

 Your OS assigns workers to incoming connections. Squid does not
 control that assignment. For the purposes of designing your
 storage, you may assume that the next request goes to a random
 worker. Thus, each of your workers must cache large files for files
 to be reliably cached.


 But, I think such a config SHOULD avoid duplication:
 
 if ${process_number}=1
 cache_dir  aufs /cache4/squid/${process_number} 17 32 256
 min-size=31001 max-size=20
 cache_dir  aufs /cache5/squid/${process_number} 17 32 256
 min-size=21 max-size=40
 cache_dir  aufs /cache6/squid/${process_number} 17 32 256
 min-size=41 max-size=80
 cache_dir  aufs /cache7/squid/${process_number} 17 32 256
 min-size=80
 endif 


Well, yes, restricting large file caching to one worker avoids
duplication at the expense of not caching any large files for all other
workers. Since all workers get requests for large files, either all workers
should cache them or none should. And by "cache", I mean store them in
the cache and serve them from the cache.

With the above configuration, only one worker will store large files and
serve large hits. All other workers will not store large files and will
not serve large hits.

This is why the above configuration does not work well and most likely
does not do what the admin intended it to do. It does avoid duplication,
though :-).
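
To make the contrast concrete, here is a hedged sketch of the all-workers variant (option 2 from earlier in the thread); the paths and size limits are illustrative, not the poster's actual values:

```
# Every worker gets its own large-object cache_dir, so every worker can
# store and serve large files. Duplication across workers is the cost.
workers 4
cache_dir aufs /cache/squid/worker-${process_number} 170000 32 256 min-size=31001
```

The ${process_number} macro expands to a different value in each worker, keeping the ufs-based directories isolated as SMP requires.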


Alex.



[squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-22 Thread Carlos Daniel Perez
Hi,

I configured Squid with Kerberos authentication, but when a client with
Windows 7 tries to surf the web, this appears:

== /var/log/squid3/cache.log ==
2013/03/22 16:07:09| negotiate_wrapper: Got 'YR
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
from squid (length: 219).
2013/03/22 16:07:09| negotiate_wrapper: Decode
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
(decoded length: 161).
2013/03/22 16:07:09| negotiate_wrapper: received Kerberos token
2013/03/22 16:07:09| squid_kerb_auth: DEBUG: Got 'YR
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
from squid (length: 219).
2013/03/22 16:07:09| squid_kerb_auth: DEBUG: Decode
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
(decoded length: 161).
2013/03/22 16:07:09| squid_kerb_auth: ERROR: gss_accept_sec_context()
failed: An unsupported mechanism was requested.
2013/03/22 16:07:09| negotiate_wrapper: Return 'BH
gss_accept_sec_context() failed: An unsupported mechanism was
requested.
'
2013/03/22 16:07:09| authenticateNegotiateHandleReply: Error
validating user via Negotiate. Error returned 'BH
gss_accept_sec_context() failed: An unsupported mechanism was
requested. '

if I put the username (in the format username, not the Domain\username
format) everything is fine and the client can surf... but I need authentication
without a popup...

If a Windows XP client tries to surf, this error appears:

== /var/log/squid3/cache.log ==
2013/03/22 16:07:39| negotiate_wrapper: Got 'KK
TlRMTVNTUAADGAAYAHoYABgAkgYABgBIEgASAE4aABoAYACqBYKIogUBKAoPUwBWAFEAZABwAGEAbABhAGMAaQBvAHMAQwAtAEkATgBGAE8AUgBNAEEAVABJAEMAQQCnfWU6vlE1SACf6zTftZnnH1TtUXw/0u3x1D7nej1u78M='
from squid (length: 231).
2013/03/22 16:07:39| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAHoYABgAkgYABgBIEgASAE4aABoAYACqBYKIogUBKAoPUwBWAFEAZABwAGEAbABhAGMAaQBvAHMAQwAtAEkATgBGAE8AUgBNAEEAVABJAEMAQQCnfWU6vlE1SACf6zTftZnnH1TtUXw/0u3x1D7nej1u78M='
(decoded length: 170).
2013/03/22 16:07:39| negotiate_wrapper: received type 120 NTLM token
2013/03/22 16:07:39| negotiate_wrapper: Return 'NA = NT_STATUS_UNSUCCESSFUL

It doesn't work if I put the username as on Windows 7...

The first lines of my squid.conf have:


### negotiate kerberos and ntlm authentication
auth_param negotiate program /usr/local/bin/negotiate_wrapper -d
--ntlm /usr/bin/ntlm_auth --diagnostics
--helper-protocol=squid-2.5-ntlmssp --domain=ENT --kerberos
/usr/lib/squid3/squid_kerb_auth -d -s HTTP/squid-proxy.enterprise.com
auth_param negotiate children 10
auth_param negotiate keep_alive off

### pure ntlm authentication
auth_param ntlm program /usr/bin/ntlm_auth --diagnostics
--helper-protocol=squid-2.5-ntlmssp --domain=ENT
auth_param ntlm children 10
auth_param ntlm keep_alive off



auth_param basic program /usr/lib/squid3/squid_ldap_auth -R \
-b dc=enterprise,dc=com \
-D sopo...@enterprise.com \
-w 12345 \
-f sAMAccountName=%s \
-h svq-wsus.enterprise.com
auth_param basic children 10
auth_param basic realm Internet Proxy
auth_param basic credentialsttl 1 minute

external_acl_type internet_users %LOGIN
/usr/lib/squid3/squid_ldap_group -R -K -S \
-b dc=enterprise,dc=com \
-D sopo...@enterprise.com \
-w 12345 \
-f 
((objectclass=person)(sAMAccountName=%v)(memberof=ou=%a,ou=Vip,dc=enterprise,dc=com))
\
-h svq-wsus.enterprise.com

I created my .keytab without problems following this guide:

http://www.howtoforge.com/debian-squeeze-squid-kerberos-ldap-authentication-active-directory-integration-and-cyfin-reporter
and http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos


Why do these errors happen? PS: My domain is ENTERPRISE.COM and the
users use ENT\username to access the Domain and network resources...

Thank you very much!


[squid-users] Worker-specific configurations considered evil

2013-03-22 Thread Alex Rousskov
Hello,

Please do not interpret the following as a hidden criticism of any
specific solution being discussed on the list. I just want to make an
important _general_ point after seeing _many_ attempts to use SMP macros
and conditionals in squid.conf.

IMO, official SMP support is limited to cases not using worker-specific
configurations.(*)


Any worker-specific configuration is essentially a hack or workaround. A
small hack is often the best temporary solution. The bigger the hack,
the more likely it will go wrong sooner. Something like using
${process_number} macro in access log format to see which worker got the
request is a small hack. On the opposite end of the spectrum is a huge
hack of giving each worker its own configuration file using conditionals.
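
A minimal sketch of that small hack (the log path is hypothetical):

```
# The ${process_number} macro expands per worker, so each worker
# writes to its own access log and the origin of a request is visible:
workers 2
access_log /var/log/squid/access-${process_number}.log squid
```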

SMP macros and conditionals are very handy tools. However, when deciding
whether to make worker-specific configs, please keep in mind that such
configs may not be officially supported (now and especially in the
future). In other words, any "my worker-specific config does not work or
stopped working" support request is far less likely to be officially
addressed than a similar request not involving a worker-specific
configuration.

In some cases, we will have to break currently working worker-specific
configurations to make progress with SMP support.

The SMP support goal is to have a single Squid configuration for all
workers, with no macros or conditionals. SMP-unaware features may
require hacks or exceptions to work in SMP environment. The number of
such features decreases with time. When a feature transitions from
SMP-unaware to SMP-aware, its worker-specific configuration may break.
This does not mean that one must not use SMP-unaware features in SMP
environments, but one should understand the trade-offs involved before
committing to their long-term use.


Thank you,

Alex.
(*) That is not the only SMP limitation, of course.


Re: [squid-users] Why is this un-cacheable?

2013-03-22 Thread Amos Jeffries

On 22/03/2013 9:04 p.m., csn233 wrote:

URL: 
http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/en_US/AdbeRdr950_en_US.exe

It shows a MISS, regardless of how I tweak the refresh_pattern,
including the adding of all the override* and ignore* options:

Last-Modified: Wed, 04 Jan 2012 07:08:53 GMT
...
X-Cache: MISS from ...
X-Cache-Lookup: MISS from ...


What have I missed, so to speak?


The default Squid settings will cache it for a whole 7 days. 
No special configuration is required.

http://redbot.org/?uri=http%3A%2F%2Farmdl.adobe.com%2Fpub%2Fadobe%2Freader%2Fwin%2F9.x%2F9.5.0%2Fen_US%2FAdbeRdr950_en_US.exe

You need to supply a lot more details about the problem if you are to 
get help.

 Squid version? (squid -v output)
 What HTTP client software are you testing it with?
 How are you testing? If you are doing anything special like 
interception proxy, please also include the http(s) port settings you 
are using and indicate which port you are testing with.

 Is it *always* a MISS or just mostly?
 What do the full access.log lines look like for these object fetches? 
(obfuscate any sensitive info consistently so we can still see whether one 
client does a sequence of requests).

 What refresh_pattern settings do you have?
 What maximum_object_* and cache_dir, and cache_mem settings do you 
have in your squid.conf?
 What cache allow/deny directive settings do you have? Also include 
the full definition of any relevant ACLs used on the cache allow/deny 
directive lines.


Amos


Re: [squid-users] acl time stop at specified hour

2013-03-22 Thread Amos Jeffries

On 22/03/2013 8:34 p.m., Orlando Camarillo wrote:

Hi brothers.

I have running Squid HTTP Proxy 3.0 over Debian, everything is working
fine, just i got weird behavior with the acl time, every single day
stop working at 1723 hrs.


1723 UTC, GMT, or local time? If local time, what timezone are you in?


this is my acl for deny sites: acl dstRestricted url_regex
/etc/squid3/dstRestricted,

and aply the acl like this: http_access allow lan !dstRestricted

and here is my acl time:

acl t1 time MTWHF 07:00-21:00
acl t2 time MTWHF 14:00-15:30
acl t3 MTWHF 21:00-24:00


The "time" keyword is missing. I assume this is a typo in the email rather than 
the actual config?



acl t4 time MTWHF 00:00-07:00

and the issue happend between 1723 hrs. and  2100 hrs, so the blocked
sites becomes unlocked and the users can get access it. i tryed with a
lot of combinations on acl t1, t2, t3, t4, but never works.


All of the above ACLs are expected to match timespans outside of the 
one you are having problems with. What is the *full* sequence of your 
config file, please?
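
For reference, a time ACL only has an effect when it appears on an access rule; a minimal sketch of one common arrangement (the rule ordering here is an assumption about the poster's intent, not their actual config):

```
acl lan src 192.168.1.0/24
acl worktime time MTWHF 07:00-21:00
acl dstRestricted url_regex "/etc/squid3/dstRestricted"

# Deny the restricted sites during working hours, allow the LAN otherwise:
http_access deny lan worktime dstRestricted
http_access allow lan
http_access deny all
```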


Amos


Re: [squid-users] squid 3.2 and error_map equivalent

2013-03-22 Thread Amos Jeffries

On 23/03/2013 4:57 a.m., Martin Sperl wrote:

Hi!

Is there an equivalent for the squid 2.X error_map functionality?

Depending on context we would need to provide different error messages and text 
customizations.

The error_map config would have been an ideal match for my requirements - a small apache 
with mod_rewrite could easily get abused for that (including a policy 
framework which pages to deliver for which portion of the reverse-proxy URL, which 
virtual host does get accessed,...)

Is there any different means to do that in an elegant manner that does not 
require icap or similar?


Not exactly. You are the first person to ask for it in the last 3 
years, so no emphasis was placed on porting the feature across.


error_map simply replaces *all* upstream responses with the mapped status 
code, using the same custom template. So it would not seem to meet your 
"depending on context" requirement anyway.


Try this:
  acl 404 http_status 404
  deny_info YOUR_404_PAGE 404
  http_reply_access deny 404

... etc. This should replace the server-provided content with 
YOUR_404_PAGE *and* allow other ACLs in the reply rule set to determine 
whether or not your page is to be mapped in.
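
Extending that pattern to several status codes might look like this (the template names are hypothetical; each would be a file in Squid's error-template directory):

```
acl err404 http_status 404
acl err5xx http_status 500-599

# Map each matched reply status to a custom error template:
deny_info ERR_CUSTOM_404 err404
deny_info ERR_CUSTOM_5XX err5xx

http_reply_access deny err404
http_reply_access deny err5xx
http_reply_access allow all
```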



Amos


Re: [squid-users] Fwd: Hye I have some problems with my cache

2013-03-22 Thread Amos Jeffries

On 23/03/2013 5:22 a.m., Olivier Calzi wrote:

Thanks for the reply, I will test this ASAP.

Have a nice week-end


* Please also consider upgrading to the Ubuntu squid3 package. 2.7 has not 
been maintained or supported for several years now.


 - NP: the caching support in the most current Squid is a lot better, 
but I'm not sure the Ubuntu packaged version is new enough to include 
all that. Either way the code is a lot closer to the currently supported 
versions, so tracing any issues and fixing them is a lot easier.



2013/3/22 Squidblacklist:

Also I use this directive in my conf to ensure squid will purge older
unused cache and not run out of disk space

cache_swap_low 90
cache_swap_high 95

Not sure if you need that, but it works for me.


These are the defaults. In Squid-2 they are required to be present in 
the config file.



On Fri, 22 Mar 2013 07:40:01 -0700
Squidblacklist wrote:


You have your cache set to 1024mb in size, this is very small, it is
no wonder it is filling quickly. I would increase that.



On Fri, 22 Mar 2013 11:40:24 +0100
Olivier Calzi wrote:


Hye Everyone,

Thanks for reading me.

I'm working in a firm and we have a situation with squid on one of
ours squidserveur.
There are the same squid.conf except for the initialisation of
Squidguard.

The problem is:
- My cache grow up again and again and soon my drive will be full
and we don't understand why.


How big is the drive?


Our squid is:

Package: squid
Status: install ok installed
Priority: optional
Section: web
Installed-Size: 1876
Maintainer: Ubuntu Developers
ubuntu-devel-disc...@lists.ubuntu.com Architecture: i386
Version: 2.7.STABLE7-1ubuntu12.6

My squid.conf:


http_port 192.168.1.244:3128

# Added for the accounting software that does not accept HTTP
# 417 responses on HTTP/1.1 requests even though it should
ignore_expect_100 on

hierarchy_stoplist cgi-bin ?

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


This no_cache directive has been deprecated since Squid-2.4. Remove 
the "no_" portion to see what it actually does.


Also, the QUERY ACL has not been needed (and is a little harmful to HIT 
ratios) since Squid-2.6. Consider removing those lines.
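
For reference, the replacement shipped in the Squid-2.6+ default configuration handles dynamic content via freshness rules instead of a cache deny, so responses that carry explicit cache-control information can still be cached:

```
# Default since Squid-2.6: treat URLs containing cgi-bin or a query
# string as stale-by-default rather than forbidding caching outright.
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern .                 0 20% 4320
```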



cache_dir ufs /var/cache/squid 1024 16 256


* Check the size of the swap.state file in this directory. Also check the 
log files directory.
  If you have a very busy cache they can all grow fast, and Squid 
requires more frequent "squid -k rotate" runs to clean up the swap.state 
journal and log files. Even if you are using an external log manager, 
"squid -k rotate" is still required to perform the related state cleanup 
and cache journal management.
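
One common way to schedule that cleanup is a cron entry (a sketch; the binary path and timing are assumptions for this system):

```
# /etc/cron.d/squid - rotate logs and rewrite the swap.state journal daily
0 4 * * * root /usr/sbin/squid -k rotate
```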



hosts_file /etc/hosts




refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320


snip

coredump_dir /var/spool/squid


* Check this core dump directory to see if it is filling your disk with 
lots of core dumps. If so, please seriously consider the Squid-3 upgrade, 
and if the problem remains, utilize the Squid-3 core dumps to isolate the 
issue.


  - Also, if you can identify the directory Squid considers to be its 
home directory on startup (it *should* be the coredump_dir, but not always 
in the older Squid), check that directory as well.



Amos


Re: [squid-users] Eliminate PopUP authentication for web Windows Users

2013-03-22 Thread Amos Jeffries

On 23/03/2013 9:52 a.m., Carlos Daniel Perez wrote:

Hi,

I configure Squid with Kerberos athentication, but when a client with
windows 7 try to surf web appear:

== /var/log/squid3/cache.log ==
2013/03/22 16:07:09| negotiate_wrapper: Got 'YR
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
from squid (length: 219).
2013/03/22 16:07:09| negotiate_wrapper: Decode
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
(decoded length: 161).
2013/03/22 16:07:09| negotiate_wrapper: received Kerberos token
2013/03/22 16:07:09| squid_kerb_auth: DEBUG: Got 'YR
YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
from squid (length: 219).
2013/03/22 16:07:09| squid_kerb_auth: DEBUG: Decode
'YIGeBgYrBgEFBQKggZMwgZCgGjAYBgorBgEEAYI3AgIeBgorBgEEAYI3AgIKonIEcE5FR09FWFRTAABgcLv3Bs/GeImNryJCPliRU4J64wGv+JW11hiPEZ3knb5360uTrKKtHBe8GVif0T00OwAAYAEAAEVyfDIyRYtIv9kqa6BepAo='
(decoded length: 161).
2013/03/22 16:07:09| squid_kerb_auth: ERROR: gss_accept_sec_context()
failed: An unsupported mechanism was requested.
2013/03/22 16:07:09| negotiate_wrapper: Return 'BH
gss_accept_sec_context() failed: An unsupported mechanism was
requested.
'
2013/03/22 16:07:09| authenticateNegotiateHandleReply: Error
validating user via Negotiate. Error returned 'BH
gss_accept_sec_context() failed: An unsupported mechanism was
requested. '

if i put the username (in format username and not in Domain\username
format) all is fine and client can surf... but i need authentication
without popup...

If a Windows XP client try to surf this error appear:

== /var/log/squid3/cache.log ==
2013/03/22 16:07:39| negotiate_wrapper: Got 'KK
TlRMTVNTUAADGAAYAHoYABgAkgYABgBIEgASAE4aABoAYACqBYKIogUBKAoPUwBWAFEAZABwAGEAbABhAGMAaQBvAHMAQwAtAEkATgBGAE8AUgBNAEEAVABJAEMAQQCnfWU6vlE1SACf6zTftZnnH1TtUXw/0u3x1D7nej1u78M='
from squid (length: 231).
2013/03/22 16:07:39| negotiate_wrapper: Decode
'TlRMTVNTUAADGAAYAHoYABgAkgYABgBIEgASAE4aABoAYACqBYKIogUBKAoPUwBWAFEAZABwAGEAbABhAGMAaQBvAHMAQwAtAEkATgBGAE8AUgBNAEEAVABJAEMAQQCnfWU6vlE1SACf6zTftZnnH1TtUXw/0u3x1D7nej1u78M='
(decoded length: 170).
2013/03/22 16:07:39| negotiate_wrapper: received type 120 NTLM token
2013/03/22 16:07:39| negotiate_wrapper: Return 'NA = NT_STATUS_UNSUCCESSFUL


type 120?  Something is getting the decoding wrong in the helper. 
That is a type-3 (credentials, handshake complete) token.





Doesn't work if i put the username like Windows 7...

The first lines of my squid.conf have:


### negotiate kerberos and ntlm authentication
auth_param negotiate program /usr/local/bin/negotiate_wrapper -d
--ntlm /usr/bin/ntlm_auth --diagnostics
--helper-protocol=squid-2.5-ntlmssp --domain=ENT --kerberos
/usr/lib/squid3/squid_kerb_auth -d -s HTTP/squid-proxy.enterprise.com
auth_param negotiate children 10
auth_param negotiate keep_alive off

### pure ntlm authentication
auth_param ntlm program /usr/bin/ntlm_auth --diagnostics
--helper-protocol=squid-2.5-ntlmssp --domain=ENT
auth_param ntlm children 10
auth_param ntlm keep_alive off



auth_param basic program /usr/lib/squid3/squid_ldap_auth -R \
 -b dc=enterprise,dc=com \
 -D sopo...@enterprise.com \
 -w 12345 \
 -f sAMAccountName=%s \
 -h svq-wsus.enterprise.com
auth_param basic children 10
auth_param basic realm Internet Proxy
auth_param basic credentialsttl 1 minute

external_acl_type internet_users %LOGIN
/usr/lib/squid3/squid_ldap_group -R -K -S \
 -b dc=enterprise,dc=com \
 -D sopo...@enterprise.com \
 -w 12345 \
 -f 
((objectclass=person)(sAMAccountName=%v)(memberof=ou=%a,ou=Vip,dc=enterprise,dc=com))
\
 -h svq-wsus.enterprise.com

I create my .keytab without problem follow this guide:

http://www.howtoforge.com/debian-squeeze-squid-kerberos-ldap-authentication-active-directory-integration-and-cyfin-reporter
and http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos


Why happend these errors? PD. My domain is ENTERPRISE.COM and the
users use ENT\username to acces Domain and network resources...

Thank you very much!




Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Amos Jeffries

On 22/03/2013 11:05 p.m., Amos Jeffries wrote:

On 22/03/2013 7:21 p.m., Sokvantha YOUK wrote:

Dear Amos,

I would love to go down that path and try the SMP equivalent of CARP
peering. Please guide me.


The design is laid out here with config files for the pre-SMP versions 
of Squid:

 http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

The SMP worker version of that is only slightly different. There is 
only one squid started using a single main squid.conf containing a 
series of if-conditions assigning each worker to a frontend or backend 
configuration file like so:


I've added a slightly better form of this configuration to the wiki at:
  http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

Amos


Re: [squid-users] Maximum disk cache size per worker

2013-03-22 Thread Sokvantha Youk

On 3/23/2013 11:19 AM, Amos Jeffries wrote:

On 22/03/2013 11:05 p.m., Amos Jeffries wrote:

On 22/03/2013 7:21 p.m., Sokvantha YOUK wrote:

Dear Amos,

I would love to go down that path and try the SMP equivalent of CARP
peering. Please guide me.


The design is laid out here with config files for the pre-SMP 
versions of Squid:

 http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

The SMP worker version of that is only slightly different. There is 
only one squid started using a single main squid.conf containing a 
series of if-conditions assigning each worker to a frontend or 
backend configuration file like so:


I've added a slightly better form of this configuration to the wiki at:
  http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

Amos

Dear Amos,

Thank you for your config sample. I will give it a try with the latest 
Squid version squid-3.3.3-20130322-r12517 on CentOS 6.3 x64.

Will share the result :)

---
Regards,
Vantha