[squid-users] Double Authentication

2008-07-02 Thread David Valin
Hello!

 

I have set up a Squid 2.6.STABLE16 server on RHEL4. Our company uses
MS Active Directory, and the squid server authenticates users against AD
successfully, but we have a problem: users that are not in AD (we are still
migrating our systems) cannot browse. I have written this in the squid.conf
file:
 

auth_param basic program /usr/libexec/squid/ncsa_auth /usr/local/squid/etc/passwd
(for users not in AD)


auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
(for users in AD)


When a user in AD opens Internet Explorer or Mozilla Firefox, they are
authenticated automatically (I checked cache.log) and can browse.

When a user not in AD opens Internet Explorer or Mozilla, a small window
pops up asking for a username and password. I type a valid username and
password from the file, but it does not work.

One password file was created with Webmin, and another by following some
guides. I have tried both password files, but with no success.

 

I would like to know how to manage two different types of
authentication: the first for users in AD and the second for users not in
AD.
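One common approach is simply to configure both schemes: squid advertises every configured scheme in its 407 reply, so NTLM-capable domain clients pick NTLM while other clients fall back to Basic against the ncsa_auth file. A minimal squid.conf sketch of that setup follows (paths are taken from the post above; the `children`, `realm`, and `credentialsttl` values are illustrative assumptions, and `--helper-protocol` assumes Samba's double-dash option syntax):

```
# NTLM first: domain (AD) clients will prefer this scheme
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5

# Basic as fallback: non-AD users authenticate against the passwd file
auth_param basic program /usr/libexec/squid/ncsa_auth /usr/local/squid/etc/passwd
auth_param basic children 5
auth_param basic realm Squid proxy
auth_param basic credentialsttl 2 hours

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```

If the Basic path still fails, it is worth testing the helper by hand (run ncsa_auth, type `user password`, and check for OK/ERR) to rule out a malformed passwd file.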

 

Thank you very much.




Re: [squid-users] Squid ZPH patch

2008-07-02 Thread pritam

Henrik Nordstrom wrote:

http://www.squid-cache.org/Versions/v2/2.7/cfgman/
http://www.squid-cache.org/Versions/v2/2.7/RELEASENOTES.html

Regards
Henrik
  
Thanks for the link and your answers. I have updated my squid from 2.6 to
2.7 and have the COSS filesystem working properly, along with the new
features of 2.7.


Thanks
Pritam, Kathmandu





Re: [squid-users] Squid + F5 balancing doesnt work!!!

2008-07-02 Thread Adrian Chadd
2008/7/2 Jeff Peng [EMAIL PROTECTED]:

 That's right.
 F5 has 3 kinds of session-keeping algorithms, which are F5's patents.
 btw, is there any open-source algorithm for keeping sessions in a
 load-balancing setup?

They've patented balancing based on tracking session cookies?
I'm sure the acedirector predates the f5 and had options to do that.




adrian


Re: [squid-users] Squid + F5 balancing doesnt work!!!

2008-07-02 Thread Jeff Peng
On Wed, Jul 2, 2008 at 3:58 PM, Adrian Chadd [EMAIL PROTECTED] wrote:
 2008/7/2 Jeff Peng [EMAIL PROTECTED]:

 That's right.
 F5 has 3 kinds of session-keeping algorithms, which are F5's patents.
 btw, is there any open-source algorithm for keeping sessions in a
 load-balancing setup?

 They've patented balancing based on tracking session cookies?

I think so, but in fact I have no experience with this setting.


-- 
Regards,
Jeff. - [EMAIL PROTECTED]


Fwd: [squid-users] Squid + F5 balancing doesnt work!!!

2008-07-02 Thread Andrew Miehs

Resent to list

Begin forwarded message:


From: Andrew Miehs [EMAIL PROTECTED]
Date: 2 July 2008 10:32:34 AM
To: chris brain [EMAIL PROTECTED]
Subject: Re: [squid-users] Squid + F5 balancing doesnt work!!!


On 02/07/2008, at 4:18 AM, chris brain wrote:

We use F5 boxes. Amos is correct. Turn on destination persistence for the
squid pool on the F5 unit. This will hold provided the traffic is consistent
over the persistence timer (the timer might have to be set quite large,
i.e. hours). Depending on the type of F5 unit, you will have different
methods of persistence available.

chris


You will probably want to use source-IP-based persistence, as
you are not using squid in reverse proxy mode.

A side thought: can the F5s do the authentication? Hmm...

Cheers

Andrew




Re: [squid-users] udp_incoming_address and udp_outgoing_address

2008-07-02 Thread John Doe
 Looks like a false positive.
 
 Maybe you need to enable retry_on_error. Shouldn't be needed, but I
 haven't verified this in quite a while..

It only changed the 403 Forbidden to a constant 200 with no headers
(assumed HTTP/0.9).

  The latest from a CentOS 5.2: squid-2.6.STABLE6-5.el5_1.2
  I guess I will have to manually compile a STABLE20... Or is 2.7STABLE3
  better and as stable?
 
 I would recommend 2.7.
 
 If you recompile 2.6 then make sure to get 2.6.STABLE21, or you'll miss
 some annoying bugfixes.

I compiled 2.7STABLE3 and have no more problems...
Now I will just have to check my configs for 2.6-to-2.7 changes.

Thx,
JD


  



Re: [squid-users] Re: SSL Client certificates

2008-07-02 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 00:39 +0200, Alex van Denzel wrote:
 On Tue, Jul 1, 2008 at 12:26 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
  OpenSSL also supports a directory with multiple CRLs, hashed by the
  issuing CN, and dynamic updates.
 
 Is the availability of files like hash.r0 in the capath= dir
 enough to turn CRL processing on, or is the VERIFY_CRL or
 VERIFY_CRL_ALL option to sslflags= also required?

Yes, it should actually work. But you need to enable VERIFY_CRL or
VERIFY_CRL_ALL.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid Reverse Proxy w/ SSL and IIS Server - Auth problems

2008-07-02 Thread Garry
Well, it turns out that all the trouble was caused solely by the crappy
NTLM authentication of IIS ... Luckily, the application (or IIS?) could
be switched over to basic auth, and with that everything works as expected,
both with and without SSL ... Based on what I've read about NTLM in just
the last couple of hours, I wonder how M$, with its increased desire to be
seen improving security, can even let people still use it ...


Oh well ... thanks all for your hints ...

-gg


Re: [squid-users] Squid + F5 balancing doesnt work!!!

2008-07-02 Thread Henrik Nordstrom
On Tue, 2008-07-01 at 20:25 -0500, Luis Daniel Lucio Quiroz wrote:

 1214974554.906  0 99.90.40.253 TCP_DENIED/407 3249 GET 
 http://www.presidencia.gob.mx/imgs/edomayor_over.gif a2 NONE/- text/html
 
 If we use persistence, it works, but then we cannot keep sharing
 usernames. The balancing schema is like this:
 
 user - balancer f5 - squid1 
  \-squid2
 
 Squid is configured with LDAP-digest auth.

Digest auth needs persistent sessions to work best. Without session
persistence it will perform quite badly, with many repeated 407 exchanges.

The reason for this is that digest authentication is stateful, with the
server verifying that the client responds to a challenge sent by that
server. This is part of the replay protection against authenticated
session theft and is by design in the digest authentication scheme. Each
time the client gets connected to a new proxy server, the server-issued
challenge needs to be renewed.

Basic authentication works well with dumb TCP load balancing, as it's
completely stateless.

NTLM/Negotiate also works with dumb TCP load balancing, as it's very
stateful, but at the TCP connection level, not at the HTTP message
level.

Regards
Henrik




[squid-users] Pseudo-random 403 Forbidden...

2008-07-02 Thread John Doe
  Looks like a false positive.
  
  Maybe you need to enable retry_on_error. Shouldn't be needed, but I
  haven't verified this in quite a while..
 
 It only changed the 403 Forbidden to a constant 200 with no headers
 (assumed HTTP/0.9).
 
   The latest from a CentOS 5.2: squid-2.6.STABLE6-5.el5_1.2
   I guess I will have to manually compile a STABLE20... Or is 2.7STABLE3
   better and as stable?
  
  I would recommend 2.7.
  
  If you recompile 2.6 then make sure to get 2.6.STABLE21, or you'll miss
  some annoying bugfixes.
 
 I compiled 2.7STABLE3 and have no more problems...
 Now I will just have to check my configs for 2.6-to-2.7 changes.

Hum, I rejoiced too quickly...
Same problem with 2.7STABLE3

Stop squids, rm -Rf cache_dirs, start squids and get objects on random squid:

Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/index.html  = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/spain.gif   = 200 OK
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/greece.gif  = 200 OK
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/france.gif  = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/denmark.gif = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/sweden.gif  = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/finland.gif = 403 Forbidden
Squid3 ( 192.168.17.13 ) - GET http://127.0.0.1/img/japan.gif   = 200 OK
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/usa.gif = 200 OK
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/russia.gif  = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/brasil.gif  = 200 OK
Squid3 ( 192.168.17.13 ) - GET http://127.0.0.1/img/portugal.gif= 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/polska.gif  = 200 OK
Squid3 ( 192.168.17.13 ) - GET http://127.0.0.1/img/netherlands.gif = 200 OK
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/taiwan.gif  = 403 Forbidden
Squid3 ( 192.168.17.13 ) - GET http://127.0.0.1/img/china.gif   = 200 OK
Squid1 ( 192.168.17.11 ) - GET http://127.0.0.1/img/italia.gif  = 403 Bad Request
Squid2 ( 192.168.17.12 ) - GET http://127.0.0.1/img/turkey.gif  = 200 OK

Always the same objects, failing on any of the 3 squids...
And, from time to time, the 403 Forbidden becomes a 403 Bad Request (most of
the time if I turn retry_on_error on).

For example, here is what happens when squid1 is asked for
http://127.0.0.1/img/finland.gif (nobody has finland.gif since I deleted the
caches):

squid1:
1215006500.805  0 192.168.17.11 TCP_DENIED/403 1297 GET 
http://127.0.0.1/img/finland.gif - NONE/- text/html
1215006500.805  1 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006500.806  3 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006500.806  4 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006500.806  5 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
.. . .

squid2:
1215006500.805  0 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006500.805  2 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006500.806  3 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006500.806  5 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
.. . .

squid3:
Nothing

The same happens if I ask squid2 (but then the TCP_DENIED is logged by squid2).

But if I ask squid3:

squid3:
1215006804.230    609 192.168.17.13 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html

squid1:
1215006803.990  0 192.168.17.11 TCP_DENIED/403 1297 GET 
http://127.0.0.1/img/finland.gif - NONE/- text/html
1215006803.990  1 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006803.991  3 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006803.991  4 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
1215006803.992  6 192.168.17.11 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid2 text/html
.. . .

squid2:
1215006803.990  1 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006803.990  2 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006803.991  3 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - CD_SIBLING_HIT/squid1 text/html
1215006803.991  5 192.168.17.12 TCP_MISS/403 1297 GET 
http://127.0.0.1/img/finland.gif - 

Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-02 Thread John Doe
   Looks like a false positive.

For info, if I remove the digests, everything works fine...


  



Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-02 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 08:21 -0700, John Doe wrote:
Looks like a false positive.
 
 For info, if I remove the digests, everything works fine...

Cache digests have a higher false-positive rate than ICP, but it can
happen with ICP as well.
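Cache digests are Bloom-filter-style summaries of a peer's cache contents, which is why false positives are inherent: different URLs can hash onto the same set of bits. The following toy Bloom filter (hypothetical sizes and URLs, not squid's actual digest code) illustrates the effect.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per key over an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def __contains__(self, key):
        # A True answer is only "maybe" (possible false positive);
        # a False answer is definite (no false negatives).
        return all(self.bits[p] for p in self._positions(key))

# A deliberately undersized filter holding 20 "cached" URLs:
bf = BloomFilter(m=64, k=4)
cached = [f"http://peer.example/obj/{i}" for i in range(20)]
for url in cached:
    bf.add(url)

# Every cached URL reports a hit (no false negatives)...
assert all(url in bf for url in cached)

# ...but probing URLs the peer does NOT have yields bogus "hits", which in
# squid become sibling fetches that end in a miss (or a 403, as above):
false_hits = sum(f"http://peer.example/missing/{i}" in bf for i in range(1000))
print(f"{false_hits} false positives out of 1000 probes")
```

Real cache digests use far larger bit arrays, so the false-positive rate is much lower, but it can never reach zero.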

Regards
Henrik




[squid-users] Squid 2.6 not caching downloaded files

2008-07-02 Thread Tony Da Silva

Hi,

I have noticed that my squid is only caching HTML files and not downloaded
files.

In my squid.conf I have maximum_object_size 250 MB, but whenever I do test
downloads, the file is downloaded from the source again.

When the initial download completes, store.log writes RELEASE and the URL of
the file, and access.log indicates TCP_MISS on subsequent downloads.

my squid.conf is as follows:

http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp:      1440    20%     10080
refresh_pattern ^gopher:   1440    0%      1440
refresh_pattern .          0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563  # https, snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
**acl lan removed**
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
**hostname removed***
always_direct allow all
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 1 16 256
reference_age 1 month
maximum_object_size 250 MB

Can somebody please check if I am missing something? 





Re: [squid-users] Squid3 Authentication digest ldap problema

2008-07-02 Thread Edward Ortega
Hi!
  
Henrik Nordstrom wrote:
 On Thu, 2008-06-19 at 15:49 -0430, Edward Ortega wrote:
   
 Hi!

 I have a problem with LDAP digest authentication on squid3; I'm
 using Squid Cache: Version 3.0.PRE5 on Debian ia64:

# /usr/lib/squid3/digest_ldap_auth -v 3 -b 'dc=something,dc=com' -F
 '(&(objectclass=posixAccount)(uid=%s))' -H 'ldap://ldap' -A 
 'userPassword' -l -e -d
 someuser somepassword
 ERR
  
 Any help would be appreciated, thanks!
 

 Digest helpers expect a different input:

 "username":"realm" followed by enter
 (with the quotes)

 Additionally, userPassword is usually write-only in most LDAP trees for
 security reasons, and practically never contains a Digest H(A1) hash (the
 -e option).

 The job of a digest helper is to return the Digest H(A1) hash for a
 given username + realm combination. This can be based on either
 plaintext passwords or precalculated digest H(A1) hashes stored in the
 backend.

 H(A1) is MD5(username + ":" + realm + ":" + password)

   
  OK, I store in the 'street' attribute something like you said
(MD5(username + ":" + realm + ":" + password)). Do I have to store the
realm argument in another attribute for squid to understand the hash?

#/usr/lib/squid3/digest_ldap_auth -v 3 -b 'dc=something,dc=com' -F
'(&(objectclass=posixAccount)(uid=%s))' -H 'ldap://ldap' -A 'street' 
-l -d

 Regards
 Henrik
   
Thanks again


[squid-users] 'squid -z' going wrong...

2008-07-02 Thread Kurt Buff
I am installing squid on a new (actually old, but it's newly installed
with FreeBSD) box. I'm trying to do squid -z to initialize cache, and
it's not working as I expect from reading squid.conf.default.

The error:

zsquid2# /usr/local/sbin/squid -z
FATAL: Bungled squid.conf line 6: cache_dir aufs /squid 54476 512 1024
Squid Cache (Version 3.0.STABLE6): Terminated abnormally.
CPU Usage: 0.021 seconds = 0.021 user + 0.000 sys
Maximum Resident Size: 3808 KB
Page faults with physical i/o: 0

/squid is a RAID0 volume, 133 GB in size, so I'm using
less than half of it.


What am I doing wrong? I've tried various sizes for the Megabytes, L1
and L2 parameters, and nothing seems to work.

Kurt


Re: [squid-users] Squid3 Authentication digest ldap problema

2008-07-02 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 14:52 -0430, Edward Ortega wrote:

 OK, I store in the 'street' attribute something like you said
 (MD5(username + ":" + realm + ":" + password)). Do I have to store the
 realm argument in another attribute for squid to understand the hash?
 
 #/usr/lib/squid3/digest_ldap_auth -v 3 -b 'dc=something,dc=com' -F
 '(&(objectclass=posixAccount)(uid=%s))' -H 'ldap://ldap' -A 'street' 
 -l -d

digest_ldap_auth expects an attribute containing either

a) a plain-text password

or, when using the -e command line option,

b) realm:hash

If encrypted mode (realm:hash) is used, the attribute may be
multi-valued, with one value per supported realm.
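The H(A1) hash Henrik describes can be computed with a few lines of Python (the username, realm, and password below are placeholders; the realm string must match the one squid announces in its digest challenge):

```python
import hashlib

def digest_ha1(username: str, realm: str, password: str) -> str:
    """H(A1) = MD5(username ":" realm ":" password), hex-encoded, as used
    by HTTP Digest authentication and expected by squid's digest helpers."""
    a1 = f"{username}:{realm}:{password}".encode("utf-8")
    return hashlib.md5(a1).hexdigest()

# Placeholder credentials; store the result as "realm:hash" for -e mode.
ha1 = digest_ha1("someuser", "Squid proxy", "somepassword")
print(f"Squid proxy:{ha1}")
```

Storing the `realm:hash` form rather than the plain-text password keeps the password itself out of the directory while still letting the helper answer digest challenges.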

Regards
Henrik




Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-02 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 18:49 +0200, Henrik Nordstrom wrote:
 On Wed, 2008-07-02 at 08:21 -0700, John Doe wrote:
 Looks like a false positive.
  
  For info, if I remove the digests, everything works fine...
 
 Cache digests have a higher false-positive rate than ICP, but it can
 happen with ICP as well.

In other words, please file a bug report at http://bugs.squid-cache.org/

Regards
Henrik




Re: [squid-users] 'squid -z' going wrong...

2008-07-02 Thread Henrik Nordstrom
On Wed, 2008-07-02 at 13:20 -0700, Kurt Buff wrote:
 I am installing squid on a new (actually old, but it's newly installed
 with FreeBSD) box. I'm trying to do squid -z to initialize cache, and
 it's not working as I expect from reading squid.conf.default.
 
 The error:
 
 zsquid2# /usr/local/sbin/squid -z
 FATAL: Bungled squid.conf line 6: cache_dir aufs /squid 54476 512 1024

Is the aufs cache_dir type enabled in your squid binary?

What is the output of /usr/local/sbin/squid -v?

Regards
Henrik




[squid-users] Squid performance for 600 users

2008-07-02 Thread Carlos Alberto Bernat Orozco
Hi group

I wonder if a Debian box with 1 GB of RAM could run squid to block child
porn sites for approximately 600 users.

Is it possible? Would it perform well? Is there a checklist of the
requirements for running squid?

Thanks in advance


Re: [squid-users] 'squid -z' going wrong...

2008-07-02 Thread Ralf Hildebrandt
* Kurt Buff [EMAIL PROTECTED]:
 I am installing squid on a new (actually old, but it's newly installed
 with FreeBSD) box. I'm trying to do squid -z to initialize cache, and
 it's not working as I expect from reading squid.conf.default.
 
 The error:
 
 zsquid2# /usr/local/sbin/squid -z
 FATAL: Bungled squid.conf line 6: cache_dir aufs /squid 54476 512 1024
 Squid Cache (Version 3.0.STABLE6): Terminated abnormally.
 CPU Usage: 0.021 seconds = 0.021 user + 0.000 sys
 Maximum Resident Size: 3808 KB
 Page faults with physical i/o: 0
 
 /squid is a RAID0 volume, and it's 133gbytes in size, so I'm using
 less than half of it.
 
 
 What am I doing wrong? I've tried various sizes for the Megabytes, L1
 and L2 parameters, and nothing seems to work.

Did you build with the --enable-storeio configure option?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] 'squid -z' going wrong...

2008-07-02 Thread Kurt Buff
On Wed, Jul 2, 2008 at 1:49 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On Wed, 2008-07-02 at 13:20 -0700, Kurt Buff wrote:
 I am installing squid on a new (actually old, but it's newly installed
 with FreeBSD) box. I'm trying to do squid -z to initialize cache, and
 it's not working as I expect from reading squid.conf.default.

 The error:

 zsquid2# /usr/local/sbin/squid -z
 FATAL: Bungled squid.conf line 6: cache_dir aufs /squid 54476 512 1024

 Is the aufs cache_dir type enabled in your squid binary?

 What is the output of /usr/local/sbin/squid -v?

 Regards
 Henrik

How did I miss that? 'make config' reveals that indeed, no, aufs
wasn't selected for the port.

Sigh. I'm remaking now.

Thanks,

Kurt


[squid-users] Re: Is lame servers an issue?

2008-07-02 Thread Carlos Alberto Bernat Orozco
Hi

great explanation!!

Thanks!!

2008/7/2 Michael Coumerilh [EMAIL PROTECTED]:
 Lame servers are sort of like the mail that you receive at your new home
 that is addressed to the previous homeowner. Someone somewhere out there
 thinks that your address is the best way to get hold of that person, when
 in reality you have no way to do such a thing. And in this case, some
 computer (theirs specifically, or one in between you and them) is asking
 yours for directions. Your computer is logging that: "Hey, someone lame (pun
 intended) just asked me about 1.2.3.4, thinking I was in charge of it. I just
 thought you'd like to know."

 Mike

 Warn those who are idle, encourage the timid, help the weak, be patient
 with everyone. - Paul - (1 Thessalonians 5:14)

 On Jul 2, 2008, at 3:49 PM, Carlos Alberto Bernat Orozco wrote:

 Hi group

 Thanks for the fast answer.

 Yes, I added those lines and it stopped logging lame-servers.

 But I have another question: is this process decreasing performance on my
 server? Yes, I stopped the logging, but are the lame-server lookups really
 necessary?

 Thanks in advance!



 2008/7/2 Jeff Reasoner [EMAIL PROTECTED]:

 Not an issue you can fix directly. Those domains are delegated to hosts
 that are not properly configured nameservers, and named is logging that.

 You could prevent them from being logged in bind 9.x.x with a logging
 statement in named.conf:


 logging {
  channel null { null; };
  category lame-servers { null; };
 };

 See the BIND ARM for all the gory logging details.

 On Wed, 2008-07-02 at 10:42 -0500, Carlos Alberto Bernat Orozco wrote:

 Hi group

 Sorry for my question; I'm a newbie.

 In my logs I see a lot of this messages

 named[3028]: lame server resolving 'adon.co.kr' (in 'adon.co.kr'?):
 211.234.93.201#53
 named[3028]: lame server resolving 'core-distributions.com' (in
 'core-distributions.com'?): 80.85.91.14#53
 named[3028]: lame server resolving 'core-distributions.com' (in
 'core-distributions.com'?): 80.85.91.15#53
 named[3028]: lame server resolving 'eslcontractorshardware.com' (in
 'eslcontractorshardware.com'?): 75.127.110.43#53
 named[3028]: lame server resolving 'eslcontractorshardware.com' (in
 'eslcontractorshardware.com'?): 75.127.110.2#53
 named[3028]: lame server resolving 'web-display.com' (in
 'web-display.com'?): 67.15.81.33#53
 named[3028]: lame server resolving 'web-display.com' (in
 'web-display.com'?): 67.15.80.3#53
 named[3028]: lame server resolving 'core-distributions.com' (in
 'core-distributions.com'?): 80.85.91.15#53

 Is this an issue? can I block it? how can I block it?

 Thanks in advanced

 --
 Jeff Reasoner
 HCCA
 513 728-7902 voice






RE: [squid-users] Squid performance for 600 users

2008-07-02 Thread Jonathan Chretien

http://www.deckle.co.za/squid-users-guide/Installing_Squid

On my side, I run Squid on an HP VL420 (P4 1.8 or 2.1 GHz, 768 MB of RAM,
20 GB Seagate IDE hard drive) for approximately 125 users.

The maximum load I have seen is 0.70 (1-minute average). My CPU sometimes
peaks at 40%, but most of the time it runs at idle. I have peak times at
lunch and breaks. My average CPU is approximately 5%, with peaks at 40%. I
saw 50-60% and a load of 0.90 at the beginning, when I had some problems
with my Apple computer and NTLM auth. (I still have this problem, but I set
up a bypass for my Apple computer.)

Extrapolating from my current setup, for another 150 users, 1-1.5 GB of RAM
would probably be important, as would changing my Seagate IDE hard drive for
a 10k-15k rpm U160 or U320 SCSI disk.

Jonathan




 Date: Wed, 2 Jul 2008 15:52:16 -0500
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid performance for 600 users

 Hi group

 I wonder if a debian box with 1Gb RAM could run squid to block child
 porn sites for 600 users aprox.

 Is possible? would be good? a checklist to know the requisites for squid?

 Thanks in advanced


Re: [squid-users] 'squid -z' going wrong...

2008-07-02 Thread Kurt Buff
On Wed, Jul 2, 2008 at 1:49 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On Wed, 2008-07-02 at 13:20 -0700, Kurt Buff wrote:
 I am installing squid on a new (actually old, but it's newly installed
 with FreeBSD) box. I'm trying to do squid -z to initialize cache, and
 it's not working as I expect from reading squid.conf.default.

 The error:

 zsquid2# /usr/local/sbin/squid -z
 FATAL: Bungled squid.conf line 6: cache_dir aufs /squid 54476 512 1024

 Is the aufs cache_dir type enabled in your squid binary?

 What is the output of /usr/local/sbin/squid -v?

 Regards
 Henrik

Building directories now.

Thanks again.

Kurt


[squid-users] Squid performance... RAM or CPU?

2008-07-02 Thread Carlos Alberto Bernat Orozco
Hi group

Thanks for the answer! I like cases like yours. Any other cases?

Another question: what is the most important component for squid, RAM or CPU?

The reason I ask is that when I installed squid for 120 users, RAM usage
went through the roof.

But you're mentioning CPU more than RAM as the primary component to look at.

Could you explain this?

Thanks in advance

2008/7/2 Jonathan Chretien [EMAIL PROTECTED]:

 http://www.deckle.co.za/squid-users-guide/Installing_Squid

 On my side, I run Squid on an HP VL420 (P4 1.8 or 2.1 GHz, 768 MB of RAM,
 20 GB Seagate IDE hard drive) for approximately 125 users.

 The maximum load I have seen is 0.70 (1-minute average). My CPU sometimes
 peaks at 40%, but most of the time it runs at idle. I have peak times at
 lunch and breaks. My average CPU is approximately 5%, with peaks at 40%. I
 saw 50-60% and a load of 0.90 at the beginning, when I had some problems
 with my Apple computer and NTLM auth. (I still have this problem, but I set
 up a bypass for my Apple computer.)

 Extrapolating from my current setup, for another 150 users, 1-1.5 GB of RAM
 would probably be important, as would changing my Seagate IDE hard drive for
 a 10k-15k rpm U160 or U320 SCSI disk.

 Jonathan




 Date: Wed, 2 Jul 2008 15:52:16 -0500
 From: [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid performance for 600 users

 Hi group

 I wonder if a Debian box with 1 GB of RAM could run squid to block child
 porn sites for approximately 600 users.

 Is it possible? Would it perform well? Is there a checklist of the
 requirements for running squid?

 Thanks in advance



Re: [squid-users] Squid 2.6 not caching downloaded files

2008-07-02 Thread Adrian Chadd
Look at enabling the header logging option (mime_ something in
squid.conf) and see what headers the object is being sent with.

It may be sent with headers that deny caching.



Adrian
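As a rough illustration of what to look for in those logged headers, the sketch below flags header combinations that commonly prevent caching. This is a simplified heuristic for eyeballing logs, not squid's actual cacheability algorithm (which also considers refresh_pattern, object size, and more):

```python
def looks_cacheable(headers: dict) -> bool:
    """Crude check for response headers that commonly prevent caching."""
    cc = headers.get("Cache-Control", "").lower()
    # Directives that forbid (or effectively forbid) a shared-cache hit:
    if any(tok in cc for tok in ("no-store", "no-cache", "private")):
        return False
    if "Set-Cookie" in headers:
        return False  # many proxy configs refuse to cache cookied replies
    # Some freshness information makes a cache hit likely:
    return "max-age" in cc or "Expires" in headers or "Last-Modified" in headers

print(looks_cacheable({"Cache-Control": "max-age=3600"}))         # True
print(looks_cacheable({"Cache-Control": "private, max-age=60"}))  # False
print(looks_cacheable({}))                                        # False
```

A response with no Expires, Last-Modified, or max-age at all is a frequent cause of the RELEASE/TCP_MISS pattern described above, since squid has no basis for deciding how long the object stays fresh.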


2008/7/3 Tony Da Silva [EMAIL PROTECTED]:
 Hi,

 I have noticed that my squid is only caching HTML files and not downloaded
 files.

 in my squid.conf I have maximum_object_size 250 MB, but whenever I do test
 downloads the file is downloaded from the source.

 When the initial download completes, store.log writes RELEASE and the URL of
 the file. access.log indicates TCP_MISS when I try subsequent downloads.

 my squid.conf is as follows:

 http_port 3128 transparent
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 access_log /var/log/squid/access.log squid
 hosts_file /etc/hosts
 refresh_pattern ^ftp:      1440    20%     10080
 refresh_pattern ^gopher:   1440    0%      1440
 refresh_pattern .          0       20%     4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443 563  # https, snews
 acl SSL_ports port 873  # rsync
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 563 # https, snews
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl Safe_ports port 631 # cups
 acl Safe_ports port 873 # rsync
 acl Safe_ports port 901 # SWAT
 acl purge method PURGE
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access allow purge localhost
 http_access deny purge
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 **acl lan removed**
 http_access allow localhost
 http_access allow lan
 http_access deny all
 http_reply_access allow all
 icp_access allow all
 **hostname removed***
 always_direct allow all
 coredump_dir /var/spool/squid
 cache_dir ufs /var/spool/squid 1 16 256
 reference_age 1 month
 maximum_object_size 250 MB

 Can somebody please check if I am missing something?




Re: [squid-users] Squid + F5 balancing doesnt work!!!

2008-07-02 Thread Luis Daniel Lucio Quiroz
Thanks Henrik,

That was it! I did not realize the stateful requirement of digest auth. We
changed the squid architecture a little and now it works. F5 is now using
some kind of configuration to get an active-passive scheme.

Regards,

LD

On Wednesday 02 July 2008 06:06:34 Henrik Nordstrom wrote:
 On Tue, 2008-07-01 at 20:25 -0500, Luis Daniel Lucio Quiroz wrote:
  1214974554.906  0 99.90.40.253 TCP_DENIED/407 3249 GET
  http://www.presidencia.gob.mx/imgs/edomayor_over.gif a2 NONE/- text/html
 
  If we use persistence, it works, but then we cannot keep sharing
  usernames. The balancing schema is like this:
 
  user - balancer f5 - squid1
   \-squid2
 
  Squid is configured with LDAP-digest auth.

 Digest auth needs persistent sessions to work best. Without session
 persistence it will perform quite badly, with many repeated 407 exchanges.

 The reason for this is that digest authentication is stateful, with the
 server verifying that the client responds to a challenge sent by that
 server. This is part of the replay protection against authenticated
 session theft and is by design in the digest authentication scheme. Each
 time the client gets connected to a new proxy server, the server-issued
 challenge needs to be renewed.

 Basic authentication works well with dumb TCP load balancing, as it's
 completely stateless.

 NTLM/Negotiate also works with dumb TCP load balancing, as it's very
 stateful, but at the TCP connection level, not at the HTTP message
 level.

 Regards
 Henrik




[squid-users] Could squid change src ip to client ipaddress in the Transparent mode?

2008-07-02 Thread S.KOBAYASHI
Hi guys,
I want Squid or Linux to use the HTTP client's IP address, instead of
squid:eth1's IP address, as the source address when proxying in transparent
mode. That way nobody needs to change the firewall settings.

Http_client --- squid:eth0:80 --- squid:eth1:any --- F/W --- Origin
server.

I have never run transparent mode before, so I have checked normal squid mode.
The packets sent from squid:eth1 carry eth1's IP address, and the origin
server records squid:eth1's IP address in its access log as the HTTP client's
IP address. I think that is the normal behaviour.
However, in transparent mode I want the HTTP client's IP address to appear in
the origin server's access log.
How can I do that?
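One common approach on Linux is TPROXY spoofing of the client address. A minimal sketch, assuming a TPROXY-capable setup (the cttproxy kernel patch for Squid 2.6, or newer kernels with later Squid versions) and that reply traffic from the origin routes back through the squid box:

```
# squid.conf: spoof the client's address on outgoing connections
http_port 3128 tproxy

# iptables side (illustrative): intercept client web traffic arriving on eth0
#   iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 \
#            -j TPROXY --on-port 3128 --tproxy-mark 0x1/0x1
```

Note the routing requirement: since the origin server now replies to the client's address, those replies must be forced back through the squid machine or the connections will break.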

Best regards,
Seiji Kobayashi





[squid-users] Account expiry

2008-07-02 Thread Patrick G. Victoriano
Hi Gurus,

I want to give a certain user access to the internet only during a certain
date range. What should I add to my squid.conf to implement this setup?

Example.

User IP :   192.168.1.1
Dates to allow internet access:   July 3, 2008 to July 10, 2008
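Squid's built-in time ACL matches weekdays and times of day, not calendar dates, so one possible approach is an external ACL helper. The sketch below is illustrative only (the helper path, ACL names, and hard-coded dates are assumptions based on the example in this post): it answers OK while the current date falls inside the window and ERR otherwise.

```python
#!/usr/bin/env python
# Hypothetical Squid external ACL helper: allow access only between two dates.
import sys
from datetime import date

START = date(2008, 7, 3)    # first allowed day
END = date(2008, 7, 10)     # last allowed day

def check(today=None):
    """Return "OK" while today is inside the window, "ERR" otherwise."""
    today = today or date.today()
    return "OK" if START <= today <= END else "ERR"

if __name__ == "__main__":
    # Squid sends one lookup per line and expects one OK/ERR reply per line.
    for _line in sys.stdin:
        sys.stdout.write(check() + "\n")
        sys.stdout.flush()

# Matching squid.conf lines (illustrative):
#   external_acl_type datewin ttl=300 %SRC /usr/local/bin/date_window.py
#   acl theuser src 192.168.1.1
#   acl inwindow external datewin
#   http_access allow theuser inwindow
```

With a short ttl on the external_acl_type line, access is revoked within minutes of the end date without a squid restart.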


Please help. 
TIA
 
 
 
Regards,
 
 
 TRIK






[squid-users] objects after expire time

2008-07-02 Thread Ken W.
Hello,

My origin server includes Expires headers in its responses.
When an object cached on squid expires, does squid revalidate it with the
origin server on every subsequent request for that object? If so, does this
hurt squid's performance?

Thanks for any help.
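For reference: once an object is stale, Squid issues a conditional request (If-Modified-Since, and If-None-Match where an ETag exists) on each subsequent request; a 304 Not Modified reply costs one small round trip rather than a full body transfer. If even those round trips are too frequent, freshness can be extended with refresh_pattern. A hedged squid.conf sketch; the pattern and lifetimes below are illustrative, not a recommendation:

```
# Treat image objects as fresh for up to two weeks without revalidation
refresh_pattern -i \.(gif|jpe?g|png)$   1440    50%     20160
```

This trades some staleness for fewer revalidation requests to the origin.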


[squid-users] Squid web interface

2008-07-02 Thread Patrick G. Victoriano

Hi,

Is there a program that lets you manage Squid ACLs from a web
browser?
If there is, please advise me.

Thank you
 
 
 
Regards,
 
 
 (TRIK)




[squid-users] Recommend for hardware configurations

2008-07-02 Thread Roy M.
Currently I have a testing server running Squid as a reverse proxy in front
of several web servers. The sites contain a lot of user-uploaded images
stored on NFS, which are now being accelerated by squid.

The current config of squid is as below:

Intel E5310 Quad Core 1.6GHz CPU x 2
6GB RAM (4GB assigned to squid)
400GB SAS 15K Hard Disk (RAID 5, 300GB assigned to squid)

It has been running Squid 3.0 STABLE7 for around 2 weeks, with the following stats:

Avg HTTP reqs per minute since start:   3926.2
Cache Hit               :   5min: 82.3%, 60min: 79.4%
Memory hits as % of hit :   5min: 40.9%, 60min: 47.5%
Mean Object Size        :   49.02 KB
CPU Usage               :   5min: 4.91%, 60min: 4.17%

From the MRTG report, the bandwidth throughput is:

Avg In (4M)   , Max In (6M)
Avg Out (10M) , Max Out (19M)

Now the memory is 100% used and the disk is only around 10% used; in the
long term I would expect all the disk cache to be used as well, given our
data size.

From the CPU usage, as you can see, the server seems too powerful for our
needs. I guess even if I saturated 100M of bandwidth, the server would
handle it easily.

We are planning to replace this testing server with two or three cheaper
1U servers (for some redundancy!):

Intel Dual Core or Quad Core CPU x1 (no SMP)
4GB DDR2 800 RAM
500GB or 750GB SATA (Raid 0)

Any comments?
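One sizing consideration before downsizing: Squid keeps an in-memory index of the disk cache, commonly estimated at roughly 10-14 MB of RAM per GB of cache_dir, so on a 4 GB box the disk cache must be sized so that the index plus cache_mem leave headroom for the OS. An illustrative squid.conf sketch; the numbers are assumptions for a 4 GB machine, not recommendations:

```
cache_mem 1024 MB
# ~300 GB of cache_dir would need roughly 3-4 GB of index RAM by itself,
# so cap the disk cache well below the raw disk size on a 4 GB machine:
cache_dir aufs /var/squid/cache 150000 16 256
```

With ~49 KB mean object size, index overhead (not raw disk capacity) is usually the binding constraint on these cheaper boxes.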


[squid-users] Access to IP websites blocked partially

2008-07-02 Thread Josh
Hey list,

I have an issue with my squid proxy server.
My setup is like this: client --- squid --- netcache --- internet

When I enter the url http://17.149.160.10/ in my client's browser, I get
stuck: the page cannot be displayed.
Access.log gives me:
1215060561.991   4986 10.51.128.79 TCP_MISS/000 0 GET
http://17.149.160.10/ - NONE/- -

When I enter the same url in a browser on the proxy itself, the page is
displayed without any problem. That browser is configured to use the
netcache proxy server.

Any idea what's going on?

Thanks,
Josh

Non-authoritative answer:
www.apple.com   canonical name = www.apple.com.akadns.net.
Name:   www.apple.com.akadns.net
Address: 17.149.160.10

# squid -v
Squid Cache: Version 2.6.STABLE19
configure options:  '--datadir=/usr/local/share/squid'
'--localstatedir=/var/squid' '--disable-linux-netfilter'
'--disable-linux-tproxy' '--disable-epoll' '--enable-arp-acl'
'--enable-async-io' '--enable-auth=basic digest ntlm'
'--enable-basic-auth-helpers=NCSA YP'
'--enable-digest-auth-helpers=password' '--enable-cache-digests'
'--enable-large-cache-files' '--enable-carp' '--enable-delay-pools'
'--enable-external-acl-helpers=ip_user session unix_group
wbinfo_group' '--enable-htcp' '--enable-ntlm-auth-helpers=SMB'
'--enable-referer-log' '--enable-removal-policies=lru heap'
'--enable-snmp' '--enable-ssl' '--enable-storeio=ufs aufs coss diskd
null' '--enable-underscores' '--enable-useragent-log'
'--enable-wccpv2' '--with-aio' '--with-large-files' '--with-pthreads'
'--with-maxfd=32768' 'CPPFLAGS=-I/usr/local/include'
'LDFLAGS=-L/usr/local/lib' 'CFLAGS=-DNUMTHREADS=128'
'--prefix=/usr/local' '--sysconfdir=/etc' '--mandir=/usr/local/man'
'--infodir=/usr/local/info' 'CC=cc'

# cat squid.conf
http_port 8080
icp_port 0
cache_peer 10.22.52.1 parent 8080 0 default no-query no-digest no-netdb-exchange
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 512 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 64 MB
maximum_object_size_in_memory 512 KB
ipcache_size 8192
ipcache_low 90
ipcache_high 95
fqdncache_size 8192
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/squid/cache 6 16 256
access_log /var/squid/logs/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_min 0 KB
quick_abort_max 0 KB
positive_dns_ttl 24 hours
half_closed_clients off
pconn_timeout 10 seconds
shutdown_lifetime 5 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl purge method PURGE
acl CONNECT method CONNECT
acl snmppublic snmp_community public
acl corpnet dstdomain .corp.local
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access allow CONNECT SSL_ports
http_access allow Safe_ports
http_access deny all
httpd_suppress_version_string on
visible_hostname 
memory_pools off
log_icp_queries off
client_db off
buffered_logs on
never_direct deny corpnet
never_direct allow all


[squid-users] Reverse Proxy, OWA RPCoHTTPS and NTLM

2008-07-02 Thread Abdessamad BARAKAT

Hi,

I am trying to set up squid as an SSL reverse proxy for publishing OWA
services (webmail, RPC/HTTP and ActiveSync). The publishing is currently
done by an ISA server, which I want to replace.


the flow:

Internet => Firewall (NAT) => Squid reverse proxy in DMZ (https port
8443) => Firewall (8443 open) => Exchange Server (NLB IP on https port 443)


I can get webmail working well (ActiveSync not yet tested), but RPC over
HTTP does not work: I get a 401 error code when I try to log on with
Outlook:


squid access log:

1215017068.440253 193.251.14.120 TCP_MISS/401 482 RPC_IN_DATA 
https://webmail.company.com:8443/rpc/rpcproxy.dll?exchange:6001 - 
FIRST_UP_PARENT/exchangeServer text/html
1215017080.291 96 193.251.14.120 TCP_MISS/401 482 RPC_IN_DATA 
https://webmail.company.com:8443/rpc/rpcproxy.dll?exchange:6001 - 
FIRST_UP_PARENT/exchangeServer text/html
1215017080.537 85 193.251.14.120 TCP_MISS/401 482 RPC_OUT_DATA 
https://webmail.company.com:8443/rpc/rpcproxy.dll?exchange:6001 - 
FIRST_UP_PARENT/exchangeServer text/html


IIS log:

2008-07-02 13:30:49 W3SVC1 172.16.18.136 RPC_OUT_DATA /rpc/rpcproxy.dll 
exchange:6001 443 - 172.16.18.128 MSRPC 401 1 0
2008-07-02 13:31:28 W3SVC1 172.16.18.136 RPC_IN_DATA /rpc/rpcproxy.dll 
exchange:6001 443 - 172.16.18.128 MSRPC 401 1 0
2008-07-02 13:31:34 W3SVC1 172.16.18.136 RPC_OUT_DATA /rpc/rpcproxy.dll 
exchange:6001 443 - 172.16.18.128 MSRPC 401 1 0


The IIS RPC service is configured to use Windows Integrated
Authentication, so I think I may need to set up some NTLM auth settings
to fix this problem. The GC and DC are on the same LAN as the exchange
server, and there are no firewall issues with the rpc ports (6001, 6002 and 6004).


I have tried versions 3.0.STABLE7 and 2.7.STABLE3.

If someone has ideas or a solution for resolving this issue, please share.

Thanks a lot


squid.conf:

# Define the required extension methods
extension_methods RPC_IN_DATA RPC_OUT_DATA

# Publish the RPCoHTTP service via SSL
https_port squid_ip:8443 cert=/etc/apache2/ssl/cert.pem defaultsite=webmail.toto.com

cache_peer exchange_ip parent 443 0 no-query originserver 
front-end-https=auto ssl sslflags=DONT_VERIFY_PEER name=exchangeServer


acl EXCH dstdomain .toto.com
acl all src 0.0.0.0/0.0.0.0
no_cache deny all

#no local caching
maximum_object_size 0 KB
minimum_object_size 0 KB
access_log /usr/local/squid/var/logs/access.log squid

cache_peer_access exchangeServer allow EXCH
cache_peer_access exchangeServer deny all
never_direct allow EXCH

# Lock down access to just the Exchange Server!
http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all
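NTLM authenticates the TCP connection rather than each request, so a 401 loop through a proxy often means client connections are not being pinned to the Exchange server. Squid 2.7 added connection pinning for this case; a hedged sketch of the two lines that would typically change (option names per the 2.7 https_port/cache_peer documentation, other values carried over from the config above):

```
https_port squid_ip:8443 cert=/etc/apache2/ssl/cert.pem defaultsite=webmail.toto.com connection-auth=on

cache_peer exchange_ip parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER front-end-https=auto login=PASS connection-auth=on name=exchangeServer
```

This passes the client's Windows Integrated (NTLM) credentials through to IIS instead of terminating them at the proxy; whether it also covers the RPC_IN_DATA/RPC_OUT_DATA methods of 3.0.STABLE7 would need testing.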