Re: [squid-users] squid_ldap_group with users in several OUs

2004-12-02 Thread Kelly_Connor




Hi Oliver,

Try adjusting your squid_ldap_group query just after -b
cn=Users,dc=domain,dc=local to include -s sub to search all
subcontainers.

Let me fire a question at you-

I am trying to use squid_ldap_group to query Novell eDirectory via LDAP for
multiple group memberships.

I am fuzzy on how the search filter is used, and I see in your filter that
you use variables other than %s that was referred to in some material I
read.

What is %g, and what is %u?  What is the difference between little f
and big F in your search filter?  I can find no documentation on big F.

I think this is the key I need to understand squid_ldap_group

Kelly Connor
Network Technician
Gilbert Unified School District
[EMAIL PROTECTED]


   
From:    Oliver Hookins <[EMAIL PROTECTED]>
To:      squid-users <[EMAIL PROTECTED]>
Date:    12/01/2004 08:46 PM
Subject: [squid-users] squid_ldap_group with users in several OUs




OK this is my last question about this I swear... but I really need to
know the answer to this one.

I've just found out that the site where I'll be implementing the
squid_ldap_group authorisation has several OUs containing the user accounts
in the Windows 2000 AD. At the moment my command line for squid_ldap_group
is as follows:

external_acl_type ldap_group ttl=120 negative_ttl=120 %LOGIN
/usr/lib/squid/squid_ldap_group -b cn=Users,dc=domain,dc=local -f
(&(cn=%g)(member=%u)(objectClass=group)) -B
cn=Users,dc=domain,dc=local -F samaccountname=%s -D
cn=Oliver,cn=Users,dc=domain,dc=local -w password -S 192.168.150.100

This obviously just looks in the Users container for groups and users
and any subtrees. I tried shortening the base DN for both users and
groups to just dc=domain,dc=local but it doesn't appear to work; I
suspect because of the filters or something. How can I specify a base DN
and filter when the users may be in one of any number of OUs (even
OUs nested within others)?
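(Spelled out, the shortened variant looks roughly like the following - a
sketch only; the password is a placeholder and the explicit -s sub scope is
an assumption:

external_acl_type ldap_group ttl=120 negative_ttl=120 %LOGIN
/usr/lib/squid/squid_ldap_group -b dc=domain,dc=local -f
(&(cn=%g)(member=%u)(objectClass=group)) -B dc=domain,dc=local -F
samaccountname=%s -s sub -D cn=Oliver,cn=Users,dc=domain,dc=local -w
password -S 192.168.150.100)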

Thanks in advance,
Oliver


---
Oliver Hookins
B.Sc(Computing and Information Systems)
Exhibition IT Services Pty Ltd
e: [EMAIL PROTECTED]
p: +61 2 9882 1300
f: +61 2 9882 3377


This communication is intended only for the person or entity to which it is
addressed and may contain confidential and/or privileged material.  Any
review, retransmission, dissemination or other use of, or taking any action
in reliance on, this communication by persons or entities other than the
intended recipient is prohibited. Exhibition IT Services Pty LTD makes no
express or implied representation or warranty that this electronic
communication or any attachment is free from computer viruses or other
defects or conditions which could damage or interfere with the recipients
data, hardware or software.  This communication and any attachment may have
been modified or otherwise interfered with in the course of transmission.





[squid-users] ntlm_auth - e-mail address instead of w2k user

2004-12-02 Thread bobr
hi,
  I have this problem with squid/2.5.STABLE6 and Samba 3.0.7, running
ntlm_auth --helper-protocol=squid-2.5-ntlmssp.

I have:
- internet domain:   @mycompany.com
- W2K domain is: MYW2KDOMAIN
- W2K user:  average
- user real name:Joe Average  
- user's workstation: AVERAGEWS

And I have in logs (cache.log smbloglevel=3):

1) [2004/11/23 15:03:24, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(615)
 Got user=[average] domain=[MYW2KDOMAIN] workstation=[AVERAGEWS] len1=24 
len2=24
.
2) [2004/11/23 15:03:46, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(615)
  Got [EMAIL PROTECTED] domain=[] workstation=[AVERAGEWS] len1=24 len2=24
   [2004/11/23 15:03:46, 3] utils/ntlm_auth.c:winbind_pw_check(439)
  Login for user [EMAIL PROTECTED]@[AVERAGEWS] failed due to [No such user]

Everything was OK at point 1), but since point 2) (24 seconds later) my user
cannot be authenticated via NTLM.

Has anyone seen this? The user that ntlm_auth gets is actually set to the
user's e-mail address. How can I get rid of this, or what's wrong?

Thank you in advance.

Marian.


Re[4]: [squid-users] 'Squid -k reconfigure' changes ownership of the swap.state file

2004-12-02 Thread Jafar Aliev
Good day.

EM  Check with:
EM
EM  % squid -k parse
EM
EM  further, to see that there are no other errors in squid.conf.
EM  Watch out for any errors or warnings too in cache.log when a cold start
EM  of squid is performed.

'squid -k parse' does not produce any errors or warnings.
There are no errors in the cache.log file after a cold start
(except the strange
chdir: /var/squid/cache: (2) No such file or directory
but I didn't find any occurrence of /var/squid/cache in the config file).

---[cache.log begin]---
2004/11/12 18:26:09| Starting Squid Cache version 2.5.STABLE7 for 
i686-pc-linux-gnu...
2004/11/12 18:26:09| Process ID 18046
2004/11/12 18:26:09| With 1024 file descriptors available
2004/11/12 18:26:09| Performing DNS Tests...
2004/11/12 18:26:09| Successful DNS name lookup tests...
2004/11/12 18:26:09| DNS Socket created at 0.0.0.0, port 61016, FD 4
2004/11/12 18:26:09| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2004/11/12 18:26:09| Adding nameserver ...cuted... from /etc/resolv.conf
2004/11/12 18:26:09| Adding nameserver ...cuted... from /etc/resolv.conf
2004/11/12 18:26:09| helperOpenServers: Starting 10 'squidGuard' processes
2004/11/12 18:26:09| Unlinkd pipe opened on FD 19
2004/11/12 18:26:09| Swap maxSize 512000 KB, estimated 39384 objects
2004/11/12 18:26:09| Target number of buckets: 1969
2004/11/12 18:26:09| Using 8192 Store buckets
2004/11/12 18:26:09| Max Mem  size: 131072 KB
2004/11/12 18:26:09| Max Swap size: 512000 KB
2004/11/12 18:26:09| Store logging disabled
2004/11/12 18:26:09| Rebuilding storage in /squid.cache (CLEAN)
2004/11/12 18:26:09| Using Least Load store dir selection
2004/11/12 18:26:09| chdir: /var/squid/cache: (2) No such file or directory
2004/11/12 18:26:09| Current Directory is /etc/squid
2004/11/12 18:26:09| Loaded Icons.
2004/11/12 18:26:10| Accepting HTTP connections at 192.168.1.1, port 3128, FD 
20.
2004/11/12 18:26:10| Ready to serve requests.
2004/11/12 18:26:10| Store rebuilding is  9.4% complete
2004/11/12 18:26:10| Done reading /squid.cache swaplog (43713 entries)
2004/11/12 18:26:10| Finished rebuilding storage from disk.
2004/11/12 18:26:10| 43713 Entries scanned
2004/11/12 18:26:10| 0 Invalid entries.
2004/11/12 18:26:10| 0 With invalid flags.
2004/11/12 18:26:10| 43713 Objects loaded.
2004/11/12 18:26:10| 0 Objects expired.
2004/11/12 18:26:10| 0 Objects cancelled.
2004/11/12 18:26:10| 0 Duplicate URLs purged.
2004/11/12 18:26:10| 0 Swapfile clashes avoided.
2004/11/12 18:26:10|   Took 0.7 seconds (60160.1 objects/sec).
2004/11/12 18:26:10| Beginning Validation Procedure
2004/11/12 18:26:10|   Completed Validation Procedure
2004/11/12 18:26:10|   Validated 43713 Entries
2004/11/12 18:26:10|   store_swap_size = 460796k
2004/11/12 18:26:10| storeLateRelease: released 0 objects
---[cache.log end]---

-- 
Best regards,
 Jafar Aliev admin at usn dot ru
 usn.ru administrator



RE: Re[4]: [squid-users] 'Squid -k reconfigure' changes ownership of the swap.state file

2004-12-02 Thread Elsen Marc

 
 Squid -k parse does not produce any errors or warnings.
 There are no errors in cache.log file after cold start.
 (except strange
 chdir: /var/squid/cache: (2) No such file or directory
 but I didn't find any occurrence of /var/squid/cache in config file)
...

You should treat that message as an error and resolve it:
make sure that cache dirs defined in squid.conf exist and
are accessible by the user squid runs as.
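For example (a sketch assuming squid runs as user 'squid' and cache_dir
points at /var/squid/cache):

  # create the missing cache directory, hand it to the squid user,
  # then let squid (re)build its swap directories
  mkdir -p /var/squid/cache
  chown -R squid:squid /var/squid/cache
  squid -z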

M.


[squid-users] Unofficial root CAs with squid

2004-12-02 Thread nodata
Hi.

I'm using Squid Version 2.5.STABLE6 in this configuration:
 Internet <-HTTPS-> squid <-HTTP-> Intranet

It works *perfectly* with a self-signed certificate.

However, if I sign a certificate with my own CA certificate, created using
the -newca option to CA.pl, it doesn't work, and I get the following
error:
 FATAL: Bungled squid.conf
The error goes away when I switch back to my self-signed certificate -
only a certificate signed by my own CA certificate does not work.

To try and find out why, I set up a secure website using Apache's httpd. I
added the SSLCACertificateFile directive, and it worked perfectly. I just
had to accept the certificate.

I tried various options to get squid to accept the CA, some of them
probably made up:
 sslflags=DONT_VERIFY_PEER
 cafile=/path/to/cert
 ca=/path/to/cert
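(For reference, these would go as extra options on the https_port line; a
rough sketch of the basic form, with placeholder paths and assuming an
--enable-ssl build - whether the CA-related options are accepted at all
depends on the squid version:

 https_port 443 cert=/path/to/signed-cert.pem key=/path/to/server.key)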

Thinking squid couldn't take an argument to a different CA file, I
appended my CA cert to the ca-bundle.crt file, making sure the format was
exactly the same as the other certs in the file, i.e. an x509 part then
the cert.

squid -k parse still complained.

What do I need to do to get this working?
(I'm not able to patch squid because of automatic updates.)

I'm running FC3.

Thanks a lot.


RE: [squid-users] delete one item from the cache

2004-12-02 Thread Elsen Marc

 
 
 I need to delete one particular address from the cache .. but don't know
 how.  The problem is that when the user goes to http://theaddressinquestion
 it used to redirect to http://theaddressinquestion/manager.html but now it
 is supposed to redirect to http://theaddressinquesion/index.html.  This
 works from outside the proxy, but for people using the proxy it still goes
 to the manager page.  Any ideas how to fix this?
 
   % squidclient -m PURGE http://theaddressinquestion

   Make sure the PURGE method is allowed in squid.conf.

   M.


RE: [squid-users] delete one item from the cache

2004-12-02 Thread Peter Marshall
I had that added ...

the stuff on the FAQ page (very sorry about not seeing this ... although I
did look):
acl PURGE method PURGE
acl localhost src 127.0.0.1
http_access allow PURGE localhost
http_access deny PURGE

when I ran squid -k reconfigure I got the results below ... so I removed the
acl localhost src 127.0.0.1.  This seemed to work ok.

2004/12/02 11:24:12| WARNING: '127.0.0.1' is a subnetwork of '127.0.0.1'
2004/12/02 11:24:12| WARNING: because of this '127.0.0.1' is ignored to keep
splay tree searching predictable
2004/12/02 11:24:12| WARNING: You should probably remove '127.0.0.1' from
the ACL named 'localhost'

However, when I ran the following, I got the error below.

./squidclient -m PURGE http://142.166.132.219/
client: ERROR: Cannot connect to localhost:3128: Connection refused

Any ideas?

Peter


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:07 AM
To: Peter Marshall
Subject: RE: [squid-users] delete one item from the cache





 I need to delete one particular address from the cache .. but
 don't know
 how.  The problem is that when the user goes to
 http://theaddressinquestion
 it used to redirect to
 http://theaddressinquestion/manager.html   but now it
 is supposed to redirect to
 http://theaddressinquesion/index.html .  This
 works from outside the proxy, but for people using the proxy
 it still goes
 to the manager page.  Any idea's how to fix this.


 BTW :

   http://www.squid-cache.org/Doc/FAQ/FAQ-7.html#ss7.5

 M.



[squid-users] Question about HTTP Compress and squid

2004-12-02 Thread
HI:
  I know squid supports HTTP/1.1 now, and HTTP/1.1 supports compressed
encoding formats for transferring response content. So I want to know how
squid caches the compressed content in memory and on disk: is it the
uncompressed retrieval or the compressed HTTP data blocks? If it is the
latter, how are they cached on disk and in memory?
  Another question is what format an object is stored in on disk when squid
caches it. For example, is an HTML file stored as the original (apart from
meta-data) or as an encapsulated object on disk? And how are GIF and JPEG
files stored? THANK YOU!
  Best Wish!
_
 MSN Hotmail  http://www.hotmail.com  



[squid-users] fedora core 3 ip wccp

2004-12-02 Thread Gerald Jaya
Hello all,

I am trying to configure WCCP between my Cisco 3600 and Fedora Core 3. I am
made to understand that I no longer need the ip_wccp module because the
default ip_gre module that comes with Fedora Core 3 can handle WCCP.  My
problem is that I have failed to make the router and the squid cache
communicate.

My current config

Router
ip wccp version 1
ip wccp web-cache redirect-list proxy




Fedora core 3
insmod ip_gre
iptunnel add gre1 mode gre remote 196.xxx.xxx.xxx local 196.xxx.xxx.xxx dev
eth0
ifconfig gre1 127.0.0.2 up


squid

wccp_router
httpd_accel_host
httpd_accel_port
httpd_accel_with_proxy
httpd_accel_uses_host_header
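For reference, those directives normally carry values along these lines (a
sketch only; the router address is a placeholder for the WCCP home router):

  wccp_router 196.xxx.xxx.xxx
  httpd_accel_host virtual
  httpd_accel_port 80
  httpd_accel_with_proxy on
  httpd_accel_uses_host_header on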


thanks in advance
gerry







Re: [squid-users] 'Squid -k reconfigure' changes ownership of the swap.state file

2004-12-02 Thread Jafar Aliev
Good day.
 
 (except strange
 chdir: /var/squid/cache: (2) No such file or directory
 but I didn't find any occurrence of /var/squid/cache in config file)


EM You should treat that message as an error and resolve it :
EM make sure that cache dirs defined in squid.conf exist and
EM are accessible by the user squid runs as.

I recreated this directory (/var/squid/cache) with squid user
ownership (squid:squid). The error in cache.log disappeared, but the problem
still exists: 'squid -k reconfigure' still makes squid die with
'Segment Violation'.

...continued...

After I remounted the filesystem with the squid cache to the default
position (/var/squid/cache), everything worked well. In conclusion, there is
a bug in the squid code that does not allow the cache to be positioned
anywhere other than the default location.

Do I have to report this to developers? :-)

-- 
Best regards and excuses for my English,
 Jafar Aliev admin at usn dot ru
 usn.ru administrator



RE: Re[4]: [squid-users] 'Squid -k reconfigure' changes ownership of the swap.state file

2004-12-02 Thread Chris Robertson
Check the startup script, and see what squid.conf it uses.  Squid looks for
a conf file in a default location specified when it is compiled, but can be
told (with the -f option) to use a different config file.  Running squid
-k reconfigure without the -f option will reconfigure squid to use the
config file in the default location, which could explain the problems you
are seeing.
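For example (the path here is just an illustration):

  squid -f /usr/local/squid/etc/squid.conf -k reconfigure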

Just a thought.

Chris

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 5:45 AM
To: Jafar Aliev; [EMAIL PROTECTED]
Subject: RE: Re[4]: [squid-users] 'Squid -k reconfigure' changes
ownership of the swap.state file



 
 Squid -k parse does not produce any errors or warings.
 There are no errors in cache.log file after cold start.
 (except strange
 chdir: /var/squid/cache: (2) No such file or directory
 but I didn't find any occurrence of /var/squid/cache in config file)
...

You should treat that message as an error and resolve it :
make sure that cache dirs defined in squid.conf exist and
are accessible by the user squid runs as.

M.


RE: [squid-users] squid to resolve local domain without seeing /etc/hosts

2004-12-02 Thread Chris Robertson
In your squid.conf add a line like:

hosts_file none

Chris

-Original Message-
From: Mrs. Geeta Thanu [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 1:35 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] squid to resolve local domain with out seeing
/etc/hosts


Hi all,

I have installed squid and it has been working fine for many days.
Recently we shifted our web server to a new IP, say 172.16.2.80,

but the system squid is running on has this in its /etc/hosts file:

172.16.0.1 webserver

 I want this entry not to be changed because it is needed for some
other purpose.

So now if anybody browses to webserver through squid, it points to
172.16.0.1 as the webserver.

How can I make squid not look into the /etc/hosts file and resolve
the local domains through DNS only?

Pls guide me


regds
Geetha


RE: [squid-users] delete one item from the cache

2004-12-02 Thread Chris Robertson
Squid must be set to listen on a different port than the default.  I use port
8080 myself, so for me the command would be:

squidclient -p 8080 -m PURGE http://theaddressinquestion

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 8:44 AM
To: Elsen Marc; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


This is what I have in my squid.conf file

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:05 AM
To: Peter Marshall; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache



 
 
 I need to delete one particular address from the cache .. but 
 don't know
 how.  The problem is that when the user goes to 
 http://theaddressinquestion
 it used to redirect to 
 http://theaddressinquestion/manager.html   but now it
 is supposed to redirect to 
 http://theaddressinquesion/index.html .  This
 works from outside the proxy, but for people using the proxy 
 it still goes
 to the manager page.  Any idea's how to fix this.
 
   % squidclient -m PURGE http://theaddressinquestion

   Make sure the PURGE method is allowed in squid.conf.

   M.


RE: [squid-users] delete one item from the cache

2004-12-02 Thread Peter Marshall
It is on another port .. but I tried that too ... here is my entire
squid.conf file.  Do you see anything that could be causing the problem?

Thank you very much for the suggestions and the help.

Peter


http_port 192.168.1.254:8080
http_port x.x.x.5:8081
icp_port 0
cache_mem 256 MB
cache_dir ufs /usr/local/squid/var/cache 8000 16 256
debug_options ALL,1 33,2
emulate_httpd_log on

acl public snmp_community public

acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl caris_int src 192.168.200.0/255.255.248.0
acl caris_dmz src x.x.x.0/255.255.255.192
acl admin_lst src 192.168.202.73/32 192.168.202.75/32
acl ALLOW_WIN_UP src 192.168.200.3/32 192.168.202.3/32 192.168.202.83
192.168.202.84 192.168.201.82/32
acl forcerobak src 192.168.100.0/24 x.x.x.50/32
acl aca src 192.168.90.0/24
acl Safe_ports port 21 80 88 443 563 2095 7778 8020 8090 8080 8081 8087 8096
8030 8194 8585 8988

http_access allow localhost
acl manager proto cache_object
http_access allow manager localhost

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

acl snmpServer src 192.168.202.73/32
acl ICQ url_regex -i .icq.com
acl MSN req_mime_type ^application/x-msn-messenger$
acl MSN2 url_regex -i webmessenger
acl STREAM rep_mime_type ^application/octet-stream$
acl YAHOO url_regex .msg.yahoo.com
acl MICROSOFT url_regex -i .windowsupdate
acl banned_types url_regex -i .mpeg$ .mpg$ .avi$ .wmv$ .mp3$ \.rm$ .asf$
.wma$ \.ram$ \.aif$ \.ra$ .asx$
acl INTERNAL url_regex caris.priv
acl VIRUS url_regex -i genmexe.biz

http_access allow localhost

snmp_access deny !snmpServer

http_access allow !Safe_ports admin_lst
http_access allow !Safe_ports forcerobak
http_access deny !Safe_ports

http_access allow ICQ forcerobak
http_access allow ICQ aca
http_access allow ICQ admin_lst
http_access deny ICQ
http_access deny YAHOO
http_access deny VIRUS

http_access allow MICROSOFT admin_lst
http_access allow MICROSOFT forcerobak
http_access allow MICROSOFT aca
http_access allow MICROSOFT ALLOW_WIN_UP
http_access deny MICROSOFT

http_access allow banned_types admin_lst
http_access deny banned_types

http_access allow MSN forcerobak
http_access allow MSN aca
http_access allow MSN admin_lst
http_access deny MSN
http_access deny MSN2

http_access allow forcerobak
http_access allow aca
http_access allow admin_lst
http_access allow caris_int
http_access allow caris_dmz

http_access deny all





-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 2:41 PM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


Squid must be set to listen on a different port than default.  I use port
8080 myself, so for me the command would be:

squidclient -p 8080 -m PURGE http://theaddressinquestion

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 8:44 AM
To: Elsen Marc; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


This is what I have in my squid.conf file

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:05 AM
To: Peter Marshall; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache





 I need to delete one particular address from the cache .. but
 don't know
 how.  The problem is that when the user goes to
 http://theaddressinquestion
 it used to redirect to
 http://theaddressinquestion/manager.html   but now it
 is supposed to redirect to
 http://theaddressinquesion/index.html .  This
 works from outside the proxy, but for people using the proxy
 it still goes
 to the manager page.  Any idea's how to fix this.

   % squidclient -m PURGE http://theaddressinquestion

   Make sure the PURGE method is allowed in squid.conf.

   M.



RE: [squid-users] delete one item from the cache

2004-12-02 Thread Chris Robertson
It's not listening on the loopback interface.  You've specified http_port
with IP addresses.  You have three choices:

1) Remove the ip addresses from the http_port lines (let squid listen to the
two ports on all interfaces, and just limit access using the firewall and
acls)

2) Add another http_port line (http_port 127.0.0.1:8080 # would work fine)

3) Add an acl that specifies the IP address of the server, change your PURGE
access to "http_access allow PURGE server_ip", and run squidclient -h
192.168.1.254 -p 8080 -m PURGE "http://purgethisurl".
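Spelled out, option 3 might look like this in squid.conf (a sketch; the acl
name is made up and the address follows the http_port line above):

  acl purge_host src 192.168.1.254
  http_access allow PURGE purge_host
  http_access deny PURGE

with the squidclient command above then run from the proxy box itself.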

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 9:59 AM
To: Chris Robertson; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


It is on another port .. but I tried that too ... here is my entire
squid.conf file  Do you see anything that could be causing the problem ?

Thank you very much for the sugestions and the help.

Peter


http_port 192.168.1.254:8080
http_port x.x.x.5:8081
icp_port 0
cache_mem 256 MB
cache_dir ufs /usr/local/squid/var/cache 8000 16 256
debug_options ALL,1 33,2
emulate_httpd_log on

acl public snmp_community public

acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl caris_int src 192.168.200.0/255.255.248.0
acl caris_dmz src x.x.x.0/255.255.255.192
acl admin_lst src 192.168.202.73/32 192.168.202.75/32
acl ALLOW_WIN_UP src 192.168.200.3/32 192.168.202.3/32 192.168.202.83
192.168.202.84 192.168.201.82/32
acl forcerobak src 192.168.100.0/24 x.x.x.50/32
acl aca src 192.168.90.0/24
acl Safe_ports port 21 80 88 443 563 2095 7778 8020 8090 8080 8081 8087 8096
8030 8194 8585 8988

http_access allow localhost
acl manager proto cache_object
http_access allow manager localhost

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

acl snmpServer src 192.168.202.73/32
acl ICQ url_regex -i .icq.com
acl MSN req_mime_type ^application/x-msn-messenger$
acl MSN2 url_regex -i webmessenger
acl STREAM rep_mime_type ^application/octet-stream$
acl YAHOO url_regex .msg.yahoo.com
acl MICROSOFT url_regex -i .windowsupdate
acl banned_types url_regex -i .mpeg$ .mpg$ .avi$ .wmv$ .mp3$ \.rm$ .asf$
.wma$ \.ram$ \.aif$ \.ra$ .asx$
acl INTERNAL url_regex caris.priv
acl VIRUS url_regex -i genmexe.biz

http_access allow localhost

snmp_access deny !snmpServer

http_access allow !Safe_ports admin_lst
http_access allow !Safe_ports forcerobak
http_access deny !Safe_ports

http_access allow ICQ forcerobak
http_access allow ICQ aca
http_access allow ICQ admin_lst
http_access deny ICQ
http_access deny YAHOO
http_access deny VIRUS

http_access allow MICROSOFT admin_lst
http_access allow MICROSOFT forcerobak
http_access allow MICROSOFT aca
http_access allow MICROSOFT ALLOW_WIN_UP
http_access deny MICROSOFT

http_access allow banned_types admin_lst
http_access deny banned_types

http_access allow MSN forcerobak
http_access allow MSN aca
http_access allow MSN admin_lst
http_access deny MSN
http_access deny MSN2

http_access allow forcerobak
http_access allow aca
http_access allow admin_lst
http_access allow caris_int
http_access allow caris_dmz

http_access deny all





-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 2:41 PM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


Squid must be set to listen on a different port than default.  I use port
8080 myself, so for me the command would be:

squidclient -p 8080 -m PURGE http://theaddressinquestion

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 8:44 AM
To: Elsen Marc; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


This is what I have in my squid.conf file

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:05 AM
To: Peter Marshall; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache





 I need to delete one particular address from the cache .. but
 don't know
 how.  The problem is that when the user goes to
 http://theaddressinquestion
 it used to redirect to
 http://theaddressinquestion/manager.html   but now it
 is supposed to redirect to
 http://theaddressinquesion/index.html .  This
 works from outside the proxy, but for people using the proxy
 it still goes
 to the manager page.  Any idea's how to fix this.

   % squidclient -m PURGE http://theaddressinquestion

   Make sure the PURGE method is allowed in squid.conf.

   M.


[squid-users] Unable to build V3.0-PRE3-CVS on Cygwin

2004-12-02 Thread Greg Pierce
I'm not able to build the current CVS snapshot on Cygwin.  Is there 
something I'm missing or special libraries I don't have?  I can build 
PRE3.  The error I get is:

http.cc: In member function `void HttpStateData::readReply(int, char*,
   unsigned int, comm_err_t, int, void*)':
http.cc:991: error: no match for 'operator!=' in 'HttpVersion(0, 9) !=
   (*(this->HttpStateData::entry->StoreEntry::_vptr$StoreEntry +
   12))(this->HttpStateData::entry)->HttpReply::sline.HttpStatusLine::version'
http.cc:1010: error: no match for 'operator!=' in 'httpver !=
   HttpVersion(0, 9)'
make[3]: *** [http.o] Error 1
make[3]: Leaving directory `/usr/src/squid-3.0-PRE3-CVS-NT/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/squid-3.0-PRE3-CVS-NT/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/squid-3.0-PRE3-CVS-NT/src'
make: *** [all-recursive] Error 1

Thanks, greg.


RE: [squid-users] delete one item from the cache

2004-12-02 Thread Peter Marshall
Thanks, that let me remove what I wanted to remove ... sort of .. (it is
still bringing up the wrong page) ... but it let me remove what I typed.
Thank you for your help.

Peter


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 3:08 PM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


It's not listening on the loopback interface.  You've specified http_port
with IP addresses.  You have three choices:

1) Remove the ip addresses from the http_port lines (let squid listen to the
two ports on all interfaces, and just limit access using the firewall and
acls)

2) Add another http_port line (http_port 127.0.0.1 8080 # would work fine)

3) Add an acl that specifies the IP address of the server, change your PURGE
access to http_access allow PURGE server_ip, and run squidclient -h
192.168.1.254 -p 8080 -m PURGE http://purgethisurl;.

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 9:59 AM
To: Chris Robertson; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


It is on another port .. but I tried that too ... here is my entire
squid.conf file  Do you see anything that could be causing the problem ?

Thank you very much for the sugestions and the help.

Peter


http_port 192.168.1.254:8080
http_port x.x.x.5:8081
icp_port 0
cache_mem 256 MB
cache_dir ufs /usr/local/squid/var/cache 8000 16 256
debug_options ALL,1 33,2
emulate_httpd_log on

acl public snmp_community public

acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl caris_int src 192.168.200.0/255.255.248.0
acl caris_dmz src x.x.x.0/255.255.255.192
acl admin_lst src 192.168.202.73/32 192.168.202.75/32
acl ALLOW_WIN_UP src 192.168.200.3/32 192.168.202.3/32 192.168.202.83
192.168.202.84 192.168.201.82/32
acl forcerobak src 192.168.100.0/24 x.x.x.50/32
acl aca src 192.168.90.0/24
acl Safe_ports port 21 80 88 443 563 2095 7778 8020 8090 8080 8081 8087 8096
8030 8194 8585 8988

http_access allow localhost
acl manager proto cache_object
http_access allow manager localhost

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

acl snmpServer src 192.168.202.73/32
acl ICQ url_regex -i .icq.com
acl MSN req_mime_type ^application/x-msn-messenger$
acl MSN2 url_regex -i webmessenger
acl STREAM rep_mime_type ^application/octet-stream$
acl YAHOO url_regex .msg.yahoo.com
acl MICROSOFT url_regex -i .windowsupdate
acl banned_types url_regex -i .mpeg$ .mpg$ .avi$ .wmv$ .mp3$ \.rm$ .asf$
.wma$ \.ram$ \.aif$ \.ra$ .asx$
acl INTERNAL url_regex caris.priv
acl VIRUS url_regex -i genmexe.biz

http_access allow localhost

snmp_access deny !snmpServer

http_access allow !Safe_ports admin_lst
http_access allow !Safe_ports forcerobak
http_access deny !Safe_ports

http_access allow ICQ forcerobak
http_access allow ICQ aca
http_access allow ICQ admin_lst
http_access deny ICQ
http_access deny YAHOO
http_access deny VIRUS

http_access allow MICROSOFT admin_lst
http_access allow MICROSOFT forcerobak
http_access allow MICROSOFT aca
http_access allow MICROSOFT ALLOW_WIN_UP
http_access deny MICROSOFT

http_access allow banned_types admin_lst
http_access deny banned_types

http_access allow MSN forcerobak
http_access allow MSN aca
http_access allow MSN admin_lst
http_access deny MSN
http_access deny MSN2

http_access allow forcerobak
http_access allow aca
http_access allow admin_lst
http_access allow caris_int
http_access allow caris_dmz

http_access deny all





-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 2:41 PM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


Squid must be set to listen on a different port than default.  I use port
8080 myself, so for me the command would be:

squidclient -p 8080 -m PURGE http://theaddressinquestion

Chris

-Original Message-
From: Peter Marshall [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 8:44 AM
To: Elsen Marc; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache


This is what I have in my squid.conf file

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE


-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:05 AM
To: Peter Marshall; [EMAIL PROTECTED]
Subject: RE: [squid-users] delete one item from the cache





 I need to delete one particular address from the cache .. but
 don't know
 how.  The problem is that when the user goes to
 http://theaddressinquestion
 it used to redirect to
 http://theaddressinquestion/manager.html   but now it
 is supposed to redirect to
 http://theaddressinquesion/index.html .  This
 works from outside the proxy, but for people using the proxy
 it still goes
 to the manager page.  Any idea's how to fix this.

   % squidclient -m PURGE 

[squid-users] SSL connectivity issue - please help.

2004-12-02 Thread SJH
Hi,
   I am trying to help a UK based ISP access our website.
   https://update.ucas.co.uk
   They are using squid-2.5.STABLE4 on linux. None of this ISP's users can
access secure areas of our site. They assure me that access to other HTTPS
sites works OK.

   I am trying to find out if this is a problem just with them or
with other users of squid on this version.
   Are there any public squid proxy servers I can access?
   We are using a Radware CT100 device to handle all our
secure web pages.
If anyone could spare 10 seconds to try the link above and
let me know if they can access it through their squid proxy
then I would be really grateful.
Many thanks
Steve
www.ucas.com



Re: [squid-users] cache dir files

2004-12-02 Thread Kvetch
I understand that but wondered what the difference between a MEM_HIT
and a HIT was, if there was a difference.
When does squid write cache to disk?


 
  HIT means that the object is already in the cache and or on the
  disk as you write. So in that case the object has to be read,
  nothing has to be written.
 
 
  and what is the difference between TCP_MEM_HIT and TCP_HIT:NONE
 
   http://www.squid-cache.org/Doc/FAQ/FAQ-6.html#ss6.7
 
   M.
 



RE: [squid-users] SSL connectivity issue - please help.

2004-12-02 Thread Chris Robertson
Using Squid-2.5.STABLE7 direct on a Linux 2.2 kernel box works fine.  Using
Squid-2.5.STABLE3 on a FreeBSD 5.1 box works as well.  Both requests were
using the K-Meleon browser (version 0.8.2) which uses Mozilla's Gecko engine
on Windows 2000.

Chris

-Original Message-
From: SJH [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:00 AM
To: [EMAIL PROTECTED]
Subject: [squid-users] SSL connectivity issue - please help.


Hi,

I am trying to help a UK based ISP access our website.

https://update.ucas.co.uk

They are using squid-2.5.STABLE4 on linux. All users of
this ISP cannot access secure areas of our site. They assure 
me that access to other HTTPS sites work ok.

I am trying to find out if this a problem just with them or
with other users of squid on this version.

Are they any public squid proxy servers I can access ?

We are using a Radware CT100 device to handle all our
secure web pages.

If anyone could spare 10 seconds to try the link above and
let me know if they can access it through their squid proxy
then I would be really grateful.

Many thanks

Steve
www.ucas.com





RE: [squid-users] cache dir files

2004-12-02 Thread Chris Robertson
TCP_HIT:  A valid copy of the requested object was in the cache, but not in
memory.
TCP_MEM_HIT:  A valid copy of the requested object was in the cache and it
was in memory, thus avoiding disk accesses (read).

The copy of the site will be cached during a TCP_MISS.

Chris

-Original Message-
From: Kvetch [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 11:04 AM
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] cache dir files


I understand that but wondered what the difference between a MEM_HIT
and a HIT was, if there was a difference.
When does squid write cache to disk?


 
  HIT means that the object is already in the cache and or on the
  disk as you write. So in that case the object has to be read,
  nothing has to be written.
 
 
  and what is the difference between TCP_MEM_HIT and TCP_HIT:NONE
 
   http://www.squid-cache.org/Doc/FAQ/FAQ-6.html#ss6.7
 
   M.
 



[squid-users] Re: Squid limits and hardware spec

2004-12-02 Thread Adam Aube
Martin Marji Cermak wrote:

 I have been playing with Squid under a heavy load and there are some
 stats. I am trying to maximise the Byte Hit Ratio value. I got 13%
 average, but I am not happy about this number - I want it higher

To increase your byte hit ratio, you can:

1) Switch to one of the heap cache replacement policies
2) Tune your refresh_pattern settings to make Squid cache more aggressively

See the FAQ and default squid.conf for details on these items.
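As a rough sketch of what those two changes look like in squid.conf
(assuming Squid was built with --enable-removal-policies; the
refresh_pattern numbers are only an illustration, not a recommendation):

  cache_replacement_policy heap LFUDA
  refresh_pattern -i \.(gif|jpg|png|zip|exe)$  1440  50%  20160
  refresh_pattern .                            0     20%  4320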

However, before going through the tuning, run an analysis tool (such as
Calamaris) on your logs to see what your traffic pattern is like. This will
show you what a reasonable byte hit ratio would be.

If, for example, 70% of your traffic is dynamic content (which usually
cannot be cached), then a 13% byte hit ratio is actually pretty good.

 USED HARDWARE:
 Processor: P4 1.8GHz
 Memory:1 GB
 Hardisk:   40 GB IDE 7200rpm

 Requests: 180 req/sec (peak), 60 req/sec (day average).

According to posts from Squid developers, a single caching Squid box has an
upper limit of about 300 - 400 requests/second. This isn't too bad,
considering you are using a single IDE disk for the entire system.

 maximum_object_size 51200 KB (SHOULD I MAKE IT HIGHER ???)

Actually, you might want to make it lower. Most web requests will not be for
50 MB files, and your byte hit ratio will be hurt if a 50 MB file that is
requested once forces out fifty 1 MB files that are accessed twice each.

The default is generally acceptable, unless log analysis shows large numbers
of requests for larger files.

 cache_dir aufs /cache 25000 16 256

You should size your cache to hold about a week's worth of traffic. Just
watch your memory usage (1 GB of cache ~ 10 MB of memory for metadata).
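(For example, at that ratio the 25000 MB cache_dir above would need roughly
250 MB of RAM just for the in-memory index, before cache_mem and OS file
caching are counted.)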

 cache_mem 8 MB

This is generally fine - the OS will generally use free memory to cache
files anyway, which will have the same effect as boosting this setting.

 I am going to install a new box with SCSI disks so I will report to you
 how the performance will change.

Best disk performance will be achieved with multiple small, fast SCSI disks
dedicated to Squid's cache, each with its own cache_dir (no RAID), and
round-robin between the cache_dirs.
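In squid.conf terms that is something like (sizes and mount points are
placeholders):

  cache_dir aufs /cache1 18000 16 256
  cache_dir aufs /cache2 18000 16 256
  store_dir_select_algorithm round-robin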

Adam



[squid-users] squid_ldap_auth issues

2004-12-02 Thread Ron Bettle
OK, let me start by saying that I have read and searched
the archives but I still can't figure this out ;-).

What I'm trying to do is reproduce the old smb_auth
functionality with the 'new' Active Directory LDAP.
Unfortunately I have no control over the LDAP, nor do I
really understand LDAP that well, but I'm learning ;-).

OK, here is what I have so far. After much reading and
searching through the archives I have come up with the
following.

/usr/lib/squid/squid_ldap_auth -b
cn=nameofmydc,ou=domain
controllers,dc=mydomain,dc=net -D cn=Bettle\,
Ron,ou=Users,ou=SOS,ou=Facilities,dc=mydomain,dc=net
-w mypassword -f (&(CN=%u)(objectClass=person)) -H
ip.of.my.dc

I hit enter, then I type in my user.name and
mypassword, and all I get is ERR.

What am I doing wrong? Does squid_ldap_auth have a
debug function? The old smb_auth script had a nice -d
flag that gave some nice troubleshooting information.
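(For what it's worth, the helper can at least be exercised by hand before
wiring it into squid.conf; a sketch, assuming the filter and DNs are
shell-quoted, and keeping the flags from above:

  /usr/lib/squid/squid_ldap_auth \
      -b "cn=nameofmydc,ou=domain controllers,dc=mydomain,dc=net" \
      -D "cn=Bettle\, Ron,ou=Users,ou=SOS,ou=Facilities,dc=mydomain,dc=net" \
      -w mypassword -f "(&(CN=%u)(objectClass=person))" -H ip.of.my.dc

then type 'user.name mypassword' on stdin and watch for OK or ERR. Note that
squid_ldap_auth's -f filter normally substitutes %s for the login name
rather than %u, so that may also be worth checking.)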

thank you for your time.


[squid-users] just cache images

2004-12-02 Thread Dan
I'm looking to set up a simple image proxy that only
caches certain types of images.

For example, I want to cache only one website's images and
not any other website's.

So say someone goes to example.com and views a page
with
index.asp, hello.gif, hello.png

how could I set it up to cache only *.gif on
example.com and not on test.com or anything else?


Thank you very much for your help and advice

Dan





[squid-users] Re: squid_ldap_group authorisation of 2000 AD Groups

2004-12-02 Thread Adam Aube
Oliver Hookins wrote:

 Here's the real question - is it actually possible to have group
 AUTHORISATION without requiring the user to enter any login details
 (AUTHENTICATION), i.e. the username comes from Windows or something?

How is Squid supposed to check for membership in a group if it has no
username to check the membership of?

[email disclaimer snipped]

If at all possible, could you please turn the disclaimer off? When posting
to public mailing lists, the disclaimer is pointless (and somewhat
annoying).

Adam



RE: [squid-users] just cache images

2004-12-02 Thread Chris Robertson
Set the server you want cached with:

  acl cacheThisServer dstdomain .example.com

And since you just want to cache images, I think:

  acl images rep_mime_type -i ^image/

will work.

Then pull it all together with:

  no_cache deny !cacheThisServer !images

Voila!  Don't cache anything but images from the cacheThisServer.  Can
someone verify my logic?  Will rep_mime_type work for the no_cache acl?
Otherwise, you'd have to do it via filename extension (e.g. .jpg, .gif,
etc), and that sounds like a lot more typing...

Chris

-Original Message-
From: Dan [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 02, 2004 1:44 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] just cache images


I'm looking to setup a simple image proxy that only
caches certain types of images.

for example i want to only cache 1 websites images and
not any other website.

so say someone goes to example.com and views a page
with  
index.asp, hello.gif, hello.png

how could i set it up to only cache *.gif on
example.com and not on test.com or anything else?


Thank you very much for your help and advice

Dan



__ 
Do you Yahoo!? 
Jazz up your holiday email with celebrity designs. Learn more.
http://celebrity.mail.yahoo.com


[squid-users] Re: More flexible logging options?

2004-12-02 Thread Adam Aube
Shawn Wright wrote:

 During times when our proxy is being assaulted by spyware, it spends a
 great deal of CPU time logging these denials. I would like to explore the
 possibility of one or more of the following:

 -limiting max # of connections allocated to a single IP per minute, since
 delay pools won't help when all the connections are denials (I don't
 think).

The maxconn acl type can do this, though I believe Squid will still log a
TCP_DENIED for each request over the limit. Probably not the solution you
are looking for.

You could use a program to tail the access.log (a simple Perl script could
do it) and block an IP address using the OS's firewall if the number of
denied requests passes a certain threshold.
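The same idea in shell/awk rather than Perl, as a rough sketch (the field
positions assume squid's native access.log format; the threshold, port and
iptables rule are placeholders):

  tail -F /var/log/squid/access.log | \
  awk '$4 ~ /TCP_DENIED/ { n[$3]++; if (n[$3] == 100) print $3 }' | \
  while read ip; do
      iptables -I INPUT -s "$ip" -p tcp --dport 3128 -j DROP
  done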

Adam



[squid-users] Re: squid_ldap_group authorisation of 2000 AD Groups

2004-12-02 Thread Adam Aube
Adam Aube wrote:

 Oliver Hookins wrote:
 
 Here's the real question - is it actually possible to have group
 AUTHORISATION without requiring the user to enter any login details
 (AUTHENTICATION), i.e. the username comes from Windows or something?
 
 How is Squid supposed to check for membership in a group if it has no
 username to check the membership of?

There is support for NTLM (aka Windows Integrated Authentication), but it
has some limitations:

1) It only fully works with IE
2) NTLM-over-HTTP is horribly broken - see the list archives for details

Adam



Re: [squid-users] squid_ldap_group with users in several OUs

2004-12-02 Thread Oliver Hookins
[EMAIL PROTECTED] wrote:

Hi oliver-
Try adjusting your squid_ldap_group query just after -b
cn=Users,dc=domain,dc=local to include -s sub to search all
subcontainers.
According to the man page, the search scope defaults to sub. I don't 
believe it is a search scope problem anyway, perhaps a problem of 
assembling the user DN from the base DN and username.

Let me fire a question at you-
I am trying to use squid_ldap_group to query Novell eDirectory via LDAP for
multiple group memberships.
I am fuzzy on how the search filter is used, and I see in your filter that
you use variables other than %s that was referred to in some material I
read.
What is %g, and what is %u?  What is the difference between little f
and big F in your search filter?  I can find no documentation on big F.
I think this is the key I need to understand squid_ldap_group
The -b and -f parameters specify the base DN and search filter for the 
groups while the -B and -F parameters specify the base DN and search 
filter for the users. %u is the username (or user DN if -F or -u is 
specified), %g is the group name, %s is the username. This is all in the 
man page for squid_ldap_group by the way.
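For an eDirectory tree that might end up looking something like this (a
sketch only; the containers and server name are placeholders):

  squid_ldap_group -b "ou=groups,o=myorg" -f "(&(cn=%g)(member=%u))" \
      -B "ou=users,o=myorg" -F "(cn=%s)" -s sub -h ldap.server.local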

I can get it all working if the users and groups are actually in the 
base DN container that I specify. However if they are somewhere higher 
in the tree it won't work.

Regards,
Oliver
This communication is intended only for the person or entity to which it is 
addressed and may contain confidential and/or privileged material.  Any review, 
retransmission, dissemination or other use of, or taking any action in reliance 
on, this communication by persons or entities other than the intended recipient 
is prohibited. Exhibition IT Services Pty LTD makes no express or implied 
representation or warranty that this electronic communication or any attachment 
is free from computer viruses or other defects or conditions which could damage 
or interfere with the recipients data, hardware or software.  This 
communication and any attachment may have been modified or otherwise interfered 
with in the course of transmission.


[squid-users] Re: Multiple reply_body_max_size entries

2004-12-02 Thread Adam Aube
Marco D'Ettorre wrote:

 I noted that acl based reply_body_max_size works correctly ONLY if I add
 the acl in one of http_access directives!

 My acl is an EXTERNAL one (ldap_group in particular).

 reply_body_max_size 100 allow !unlimited

 This works only if I add
 http_access allow unlimited !unlimited
 that is never true and doesn't alter my access rules.

Likely reply_body_max_size is a fast lookup that cannot wait for slow
lookups, such as external helper calls, to complete. Using the acl in
http_access makes it already available to Squid, eliminating the wait when
reply_body_max_size is evaluated.

Adam



RE: [squid-users] just cache images

2004-12-02 Thread Dan
That looks great, and seems to be working OK. Just let
me run two examples by you real quick.

Cached
TCP_MEM_HIT/200 6230 GET www.example.com/logo.jpg -
NONE/- image/jpeg

Not Cached
TCP_MISS/200 16347 GET www.example.com/photo_2.jpg -
DIRECT/1.1.1.1 image/jpeg

is that correct?

And if I wanted to add another domain name, would I do
it like example 1 or example 2?
  acl cacheThisServer1 dstdomain .example2.com
  acl cacheThisServer dstdomain .example.com
  acl images rep_mime_type -i ^image/

# example 1 -
  no_cache deny !cacheThisServer !cacheThisServer1 !images
# example 2 -
  no_cache deny !cacheThisServer !images
  no_cache deny !cacheThisServer1 !images
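(Or perhaps a third form - a sketch, assuming a single dstdomain acl can
list both domains; the acl name is made up:

  acl cacheTheseServers dstdomain .example.com .example2.com
  acl images rep_mime_type -i ^image/
  no_cache deny !cacheTheseServers
  no_cache deny !images

so that only images, and only from those two domains, get cached?)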

Thanks so much for your great advice and fast response

Dan
--- Chris Robertson [EMAIL PROTECTED] wrote:

 Set the server you want cached with:
 
   acl cacheThisServer dstdomain .example.com
 
 And since you just want to cache images, I think:
 
   acl images rep_mime_type -i ^image/
 
 will work.
 
 Then pull it all together with:
 
   no_cache deny !cacheThisServer !images
 
 Voila!  Don't cache anything but images from the
 cacheThisServer.  Can
 someone verify my logic?  Will rep_mime_type work
 for the no_cache acl?
 Otherwise, you'd have to do it via filename
 extension (e.g. .jpg, .gif,
 etc), and that sounds like a lot more typing...
 
 Chris
 
 -Original Message-
 From: Dan [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 02, 2004 1:44 PM
 To: [EMAIL PROTECTED]
 Subject: [squid-users] just cache images
 
 
 I'm looking to setup a simple image proxy that only
 caches certain types of images.
 
 for example i want to only cache 1 websites images
 and
 not any other website.
 
 so say someone goes to example.com and views a page
 with  
 index.asp, hello.gif, hello.png
 
 how could i set it up to only cache *.gif on
 example.com and not on test.com or anything else?
 
 
 Thank you very much for your help and advice
 
 Dan
 
 
   
 __ 
 Do you Yahoo!? 
 Jazz up your holiday email with celebrity designs.
 Learn more.
 http://celebrity.mail.yahoo.com
 







[squid-users] Re: external helper authorisation to a NT trusted domain

2004-12-02 Thread Adam Aube
Grund, Andreas wrote:

 I have a authorisation problem using external helper wbinfo_group.pl. We
 have 2 trusted domains DOM_A and DOM_B (NT4 Domains). Authorisation to
 DOM_A (squid server is member of DOM_A) works fine, but users belonging to
 DOM_B couldn't be authorized.

 For example: 'userB' belonging to group 'grpB' in domain 'DOM_B' tries to
 open a page. Now wbinfo_group gets 'DOM_B+userB grpB' and is sending 'ERR'
 to squid (could not lookup name).

 If the parameter would be 'DOM_B+userB DOM_B+grpB', everything would be
 fine (at least regarding my tests using wbinfo_group.pl directly from
 shell). 

 # squid_auskunftD2 is global group in DOM_A
 acl _auskunftD1_user external NT_global_group squid_auskunftD1
 # squid_auskunftD2 is global group in DOM_B
 acl _auskunftD2_user external NT_global_group squid_auskunftD2

What if you changed these acls to be:

acl _auskunftD1_user external NT_global_group DOM_A+squid_auskunftD1
acl _auskunftD2_user external NT_global_group DOM_B+squid_auskunftD2

Adam




[squid-users] Re: PLZ HELP 4 DELAY_POOLS

2004-12-02 Thread Adam Aube
Shiraz Gul Khan wrote:

 i have 256CIR DSL line and i have 100 users. i want to use DELAY_POOLS for
 slowing download speed at the user end, for the download file extensions
 .exe .dat .zip .avi.

The Delay Pools FAQ would be a good place to start:

http://www.squid-cache.org/Doc/FAQ/FAQ-19.html#ss19.8
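The shape of a simple class 1 pool for those extensions is roughly this (a
sketch only; the 16000 bytes/second rate is just a placeholder to tune):

  acl slowfiles url_regex -i \.(exe|dat|zip|avi)$
  delay_pools 1
  delay_class 1 1
  delay_access 1 allow slowfiles
  delay_access 1 deny all
  delay_parameters 1 16000/16000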

Adam



[squid-users] Re: SSL connectivity issue - please help.

2004-12-02 Thread Adam Aube
SJH wrote:

 I am trying to help a UK based ISP access our website.
 
 https://update.ucas.co.uk
 
 They are using squid-2.5.STABLE4 on linux. All users of
 this ISP cannot access secure areas of our site. They assure
 me that access to other HTTPS sites work ok.

 I am trying to find out if this a problem just with them or
 with other users of squid on this version.

 We are using a Radware CT100 device to handle all our
 secure web pages.

Have them check to see if ECN is enabled on their Squid server. If it is,
have them try turning it off. Some brain dead devices have problems
handling the ECN flag.
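On a Linux squid box that is typically (a sketch; the path assumes a
2.4/2.6 kernel with ECN support compiled in):

  # check whether ECN is enabled, and turn it off
  cat /proc/sys/net/ipv4/tcp_ecn
  echo 0 > /proc/sys/net/ipv4/tcp_ecn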

Adam



[squid-users] Re: squid time acls + auth acls

2004-12-02 Thread Adam Aube
Alberto Sierra wrote:

 i want to achieve the following goals:
 
 VIP clients (10.1.1.40-10.1.1.50) always internet
 supervisors (using username/password)
 rest of the people: time acl-dependent

 my current setup is this:

 acl safe_list dstdomain /etc/squid/safe_list
 acl ViP src 10.1.1.40-10.1.1.50/32
 acl work time MTWHF 15:00-19:50
 acl sat time A 00:00-23:59
 acl time1 time S 09:00-10:10
 acl time2 time S 11:15-11:45
 acl time3 time S 12:30-13:20
 acl time4 time S 14:45-15:15
 acl all src 0.0.0.0/0.0.0.0
 acl localhost src 127.0.0.1/255.255.255.255
 
 http_access allow ViP
 http_access allow safe_list
 # deny access to dansguardian by time:
 http_access deny localhost time1
 http_access deny localhost time2
 http_access deny localhost time3
 http_access deny localhost time4

 now, i have first to uncomment the line
 
 acl password proxy_auth REQUIRED
 http_access allow localhost password
 
 for the authentication to work, but my question then is if i can put 3
 ACLs together like:
 
 http_access deny localhost password !work
 
 or how can i address this situation??

That is valid syntax, but will not achieve what you seem to want. That will
block access to authenticated users outside the times defined by work. To
get what you want, you should have:

http_access allow localhost work
http_access allow localhost password
http_access deny all

BTW, why are you using localhost in all the http_access lines? You do
realize that will only match if the client is running on the same physical
system as Squid, right?

Adam



[squid-users] basic program authentication setting for squid_ldap_auth, am I right in my configuration?

2004-12-02 Thread Yong Bong Fong
Dear all,
   I am confused about the configuration of squid_ldap_auth in squid.conf.
Below is the format of the LDAP tree built by my system administrator; he
wants me to set up LDAP authentication through squid.

DN: cn=root, dc=shinyang, dc=com, dc=my
 |
 +-- DN: ou=qmail_users, cn=root, dc=shinyang, dc=com, dc=my
      |
      +-- DN: cn=bfyong, ou=qmail_users, cn=root, dc=shinyang, dc=com, dc=my


I understand most of the steps for setting up LDAP for squid, except the
part about the auth_param basic program line in squid.conf.

*In my squid.conf I set:
auth_param basic program  /usr/lib/squid/squid_ldap_auth -b dc=shinyang, 
dc=com, dc=my -D cn=root,dc=shinyang,dc=com,dc=my -w 
-f (&(objectclass=person)(cn=%s)) -h 172.16.0.21

*Does it look right based on the LDAP tree I supplied above?
or is it as
*/usr/lib/squid/squid_ldap_auth -b -h 172.16.0.21 -D 
cn=root,dc=shinyang,dc=com,dc=my -f (&(objectclass=person)(cn=%s))

*or is it
*/usr/lib/squid/squid_ldap_auth -b o=root -h 172.16.0.21 -D 
cn=bfyong,ou=qmail_users,o=root -w bfyongpassword -f 
(&(objectclass=person)(cn=%s))

*Is any one of the above right? If not, can you please show me how to get
the right configuration?
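Based on the tree above, a minimal sketch would be something like the
following, written on one line (the root password is a placeholder, and if
the server allows anonymous searches the -D/-w pair can be dropped):

auth_param basic program /usr/lib/squid/squid_ldap_auth -b ou=qmail_users,cn=root,dc=shinyang,dc=com,dc=my -D cn=root,dc=shinyang,dc=com,dc=my -w rootpassword -f (&(objectclass=person)(cn=%s)) -h 172.16.0.21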
thanks all...





RE: [squid-users] squid to resolve local domain without seeing /etc/hosts

2004-12-02 Thread Mrs. Geeta Thanu
Thanks Chris. When I included the line 'hosts_file none'
it worked fine and the problem is solved.

regds
Geetha

On Thu, 2 Dec 2004, Chris Robertson wrote:

 In your squid.conf add a line like:

 hosts_file none

 Chris

 -Original Message-
 From: Mrs. Geeta Thanu [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 02, 2004 1:35 AM
 To: [EMAIL PROTECTED]
 Subject: [squid-users] squid to resolve local domain with out seeing
 /etc/hosts


 Hi all,

 I have installed squid and is working fine for many days.
 Recently we shifted our web server to a IP say 172.16.2.80.

 but the system in which squid is running has in its /etc/hosts file

 172.16.0.1 webserver

  I want this entry not to be changed becos of some
 other purpose.

 so now if anybody browse webserver thru squid it is pointing to 172.16.0.1
 as webserver.

 How can I make squid not to look in to /etc/hosts file and should resolve
 the local domains also thru DNS only..

 Pls guide me


 regds
 Geetha




[squid-users] Im confused

2004-12-02 Thread Ali Abbas

  Basically a cached object is:
#
#   FRESH if expires < now, else STALE
#   STALE if age > max
#   FRESH if lm-factor < percent, else STALE
#   FRESH if age < min
#   else STALE
#


This is a sort of algorithm I read in the squid.conf file of Squid
2.5.STABLE7. What I don't get is that the first line states that an object
is FRESH if the expiry time is less than the current time. As I see it, if
the expiry time has passed and the current time is larger than it, how can
an object be FRESH rather than STALE?
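(Working through the other rules with made-up numbers, as far as I can
tell: with refresh_pattern . 0 20% 4320, i.e. min=0, percent=20%,
max=4320 minutes, and an object with no Expires header:

  age       = 10 hours   (time since squid fetched the object)
  lm-age    = 100 hours  (Last-Modified was 100 hours before the fetch)
  lm-factor = age / lm-age = 10/100 = 10%

  age (600 min) is not greater than max (4320 min), and
  lm-factor (10%) < percent (20%), so the object is FRESH.)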




Mohammed Ali Abbas
Asst. CMTS Engineer



Re: [squid-users] Squid limits and hardware spec

2004-12-02 Thread Matus UHLAR - fantomas
 On Mon, 2004-11-29 at 11:32, Martin Marji Cermak wrote:
 
 Hello guys,
 I have been playing with Squid under a heavy load and there are some 
 stats.
 I am trying to maximise the Byte Hit Ratio value. I got 13% average, 
 but I am not happy about this number - I want it higher (how to do it?). 
 There are thousands of ADSL clients using the cache and I want to know 
 what the Squid limits are.
 
 USED HARDWARE:
 Processor: P4 1.8GHz
 Memory:1 GB
 Hardisk:   40 GB IDE 7200rpm
 Controler: Serverworks Chipset
 Ethernet card: Broadcom TG3
 
 ACHIEVED PERFORMANCE:
 Byte Hit Ratio: 13% (TOO LOW !!!)

 Ow Mun Heng wrote: You want to save bandwidth or you want speed??

On 02.12 13:13, Martin Marji Cermak wrote: Yes, I want to Save bandwidth.

In such a case you probably need:
- a bigger cache (add a new disk probably)
- lower maximum_object_size
- cache_replacement_policy heap LFUDA
explanations below

 USED CONFIGURATION: maximum_object_size 51200 KB (SHOULD I MAKE IT
 HIGHER ???)
 
 I made mine to cache up to 40MB only. If you really want to have more
 byte hit ratio, then by all means, up the max_obj_size.
 
 OK, now I have: maximum_object_size 200 MB

I increased maximum_object_Size from 20MB (last time I verified what files
were repeatedly fetched they were under 20 MB) to 32 MB and the byte hit
ratio decreased.

Yes, I work at an ISP where customers tend to fetch very different files, so
that is expected. I don't know what situation you are in, but note that one
50MB file takes the space of fifty 1MB files, and there is a high probability
that the smaller files will be fetched more often.

 cache_dir aufs /cache 25000 16 256 (one ide disk, see the spec above)

 This seems too low. I used 40GB of the 80GB drive

 OK, I changed it to cache_dir aufs /cache 92000 16 256

No no no. Even if you have the whole drive for the cache, you should note that
there is some overhead in filesystems etc. I'm glad that I may use 30000000 kB
(which is a bit less than 29GB) on a 36GB hdd. You probably should use:

cache_dir aufs /cache 70000 64 256

 cache_mem 8 MB
 200 MB. More being cached to memory. Faster retrieval.
 Thank you, nice. I just hope it does not start swaping :-)

When I had 3 swap I used 300MB for the memory cache, and Squid took 850MB of
RAM. I think you may use 100 or 128 MB for the memory cache and see how much
memory Squid takes over a few days or weeks.

 And another interesting thing:

 My median Byte Hit Ratio has reached 17% (200 MB max file, 95 GB cache).
 So I recompiled squid with --enable-removal-policies and set:
   cache_replacement_policy heap LFUDA
 It looks I can gain a couple of percent (LFUDA should have a bit better 
 Byte Hit Ratio than lfu).

This is a well known thing. See the cache_replacement_policy comments in
squid.conf; you'll find that LFUDA is the best policy for a good byte hit
ratio.
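
Pulling the advice from this thread together, the relevant squid.conf lines would look something like the sketch below. The numbers are illustrative (one ~36GB disk with roughly 80% of it given to the cache), and cache_replacement_policy needs a squid built with --enable-removal-policies:

cache_replacement_policy heap LFUDA
maximum_object_size 200 MB
cache_dir aufs /cache1 30000 60 256
cache_mem 128 MB

Whether 200 MB or something smaller is the right maximum_object_size depends on the traffic; as noted above, raising it can also lower the byte hit ratio if a few large objects crowd out frequently fetched small ones.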

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
There's a long-standing bug relating to the x86 architecture that
allows you to install Windows.   -- Matthew D. Fuller


Re: [squid-users] ClamAV information needed, any recommendation?

2004-12-02 Thread Ow Mun Heng
On Thu, 2004-12-02 at 14:20, Nigel Horne wrote:
 On Thursday 02 Dec 2004 06:18, Yong Bong Fong wrote:
  Dear all,
  
 I am trying to find a good step by step or How-to guide about 
  installation and everything about ClamAV, does anyone know where can I 
  get it? I found the official site of ClamAV but seems like the 
  information in there is quite limited.
 
 You'll find lots of help on the clamav-users mailing list, see www.clamav.net
 for details.

Hmm.. are you looking at integrating squid and clamav?? If not, why is
this being posted here?

I have no experience running ClamAV with Squid, and I don't think it would be
worthwhile.

Most viruses come in via email anyway.

And running ClamAV on a Squid box will surely grind it to a screeching halt
due to the extra processing overhead.

 
  Thanks all

--
Ow Mun Heng
Gentoo/Linux on D600 1.4Ghz 
Neuromancer 16:02:42 up 6:13, 6 users, 0.59, 0.66, 0.49 



Re: [squid-users] Squid limits and hardware spec

2004-12-02 Thread Ow Mun Heng
On Thu, 2004-12-02 at 16:04, Matus UHLAR - fantomas wrote:
  On Mon, 2004-11-29 at 11:32, Martin Marji Cermak wrote:
  
  Hello guys,
  I have been playing with Squid under a heavy load and there are some 
  stats.
  I am trying to maximise the Byte Hit Ratio value. I got 13% average, 
  but I am not happy about this number - I want it higher (how to do it?). 
  There are thousands of ADSL clients using the cache and I want to know 
  what the Squid limits are.
  
  USED HARDWARE:
  Processor: P4 1.8GHz
  Memory:1 GB
  Hardisk:   40 GB IDE 7200rpm
  Controler: Serverworks Chipset
  Ethernet card: Broadcom TG3
  
  ACHIEVED PERFORMANCE:
  Byte Hit Ratio: 13% (TOO LOW !!!)
 
  Ow Mun Heng wrote: You want to save bandwidth or you want speed??
 
 On 02.12 13:13, Martin Marji Cermak wrote: Yes, I want to Save bandwidth.
 
 In such case you probably need: bigger cache (add new disk probably) lower
 - maximum_object_size cache_replacement_policy heap LFUDA
 
 explanations below
 
  USED CONFIGURATION: maximum_object_size 51200 KB (SHOULD I MAKE IT
  HIGHER ???)
  
  I made mine to cache up to 40MB only. If you really want to have more
  byte hit ratio, then by all means, up the max_obj_size.
  
  OK, now I have: maximum_object_size 200 MB
 
 I increased maximum_object_Size from 20MB (last time I verified what files
 were repeatedly fetched they were under 20 MB) to 32 MB and the byte hit
 ratio decreased.

Well, if he really wants to save bandwidth and the cache is big enough, I
don't see why a max object size of 200MB would be a bad thing.

Then again, it depends on usage.

 
 Yes, I work by an ISP where customers use to fetch very different files,
 that is expected. I don't know what situation you are in, but note that
 one 50MB file takes space of 50 1MB files and there is big probability
 that smaller files will be fetched more often.

True. One hit on a 50MB file counts for as many bytes as fifty hits on 1MB
files.

Thus, a higher byte hit ratio.

 
  cache_dir aufs /cache 25000 16 256 (one ide disk, see the spec above)
 
  This seems too low. I used 40GB of the 80GB drive
 
  OK, I changed it to cache_dir aufs /cache 92000 16 256
 
 no no no, even if you have the whole drive for the cache, you should note that
 there is some overhead in filesystems etc. I'm glad that I may use 30000000
 kB (which is a bit less than 29GB) on a 36GB hdd. You probably should use:

Proper usage is to allocate at most ~70-80% of the physical disk space to the
cache.


--
Ow Mun Heng
Gentoo/Linux on D600 1.4Ghz 
Neuromancer 16:11:04 up 6:21, 6 users, 0.57, 0.53, 0.46 



[squid-users] Re: Reverse Proxy SSL + Certificates

2004-12-02 Thread David Delamarre
Hello

I configured Squid as a reverse proxy for HTTP and all is OK. I tried with
HTTPS and it is not working. I compiled with the --enable-ssl option, and
with this version of squid it is not working... I want Squid to be fully
transparent for the certificate.

no idea ... if someone can help me please

Thanks


[squid-users] ClamAV with Squid Possible? Was [Re: [squid-users] ClamAV information needed, any recommendation?]

2004-12-02 Thread Ow Mun Heng
On Thu, 2004-12-02 at 16:54, Yong Bong Fong wrote:
 Hi Ow,
  
I forgot to mention that I am looking to integrate Squid with 
 ClamAV (if that's possible).
As I mentioned, I'm not sure if it's possible but what I do know and
_fear_ is the extra processing overhead associated with virus scanning
on a Squid Box. 

Unless you've got lots of muscle on the box.

 Actually my aim is to integrate something onto squid that makes squid 
 antiviral capable, 
I've frankly never heard of such an integration.

 and also able to block downloaded files with certain 
 extensions such as mp3, exe etc.

You can already do that with squid itself.

There are a few ways

1. Using regular expressions
acl NoAudioFiles req_mime_type -i ^audio/mpeg
or
acl NoAudioFiles rep_mime_type -i ^audio/mpeg
or
acl NoAudioFiles url_regex -i \.mp3$
and
http_access deny NoAudioFiles
http_reply_access deny NoAudioFiles

req_mime_type matches the Content-Type header of the client's HTTP request.

rep_mime_type matches the Content-Type of the response from the origin
server (it might be better than req_mime_type, which is only really useful
for prohibiting PUT/POST requests).
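
To also cover the "certain extensions such as mp3, exe" case asked about above, a combined sketch (Squid 2.5 syntax; the ACL names and the extension list are illustrative, and the deny lines must come before the final http_access rules):

acl BlockedExt urlpath_regex -i \.(mp3|exe)$
acl BlockedAudio rep_mime_type -i ^audio/
http_access deny BlockedExt
http_reply_access deny BlockedAudio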


 
 That's what my system administrator told me, but it seems like ClamAV is for 
 use with qmail, and only for antivirus, not for blocking files with 
 certain extensions.

I believe that clamav or qmail can be configured to remove those
particular extension files.

 
 Thanks anyway Ow,
 
 
 Ow Mun Heng wrote:
 
 On Thu, 2004-12-02 at 14:20, Nigel Horne wrote:
   
 
 On Thursday 02 Dec 2004 06:18, Yong Bong Fong wrote:
 
 
 Dear all,
 
I am trying to find a good step by step or How-to guide about 
 installation and everything about ClamAV, does anyone know where can I 
 get it? I found the official site of ClamAV but seems like the 
 information in there is quite limited.
   
 
 You'll find lots of help on the clamav-users mailing list, see 
 www.clamav.net
 for details.
 
 
 
 Hmm.. are you looking at integrating squid and clamav?? If not, why is
 this being posted here?
 
 I've no experience in running clamav with squid and neither do I think
 that it is worthwhile. 
 
 Most viruses come in via email anyway.
 
 And running clamav on a squid box will surely grind it to a screeching
 halt due to the extra processing overhead
 
   
 
 Thanks all
   
 
 
 --
 Ow Mun Heng
 Gentoo/Linux on D600 1.4Ghz 
 Neuromancer 16:02:42 up 6:13, 6 users, 0.59, 0.66, 0.49 
 
 
 
   
 

--
Ow Mun Heng
Gentoo/Linux on D600 1.4Ghz 
Neuromancer 17:12:49 up 7:23, 6 users, 1.27, 0.56, 0.31 



RE: [squid-users] Re: Reverse Proxy SSL + Certificates

2004-12-02 Thread Elsen Marc

 
 
 Hello
 
 i configured Squid as a reverse proxy for HTTP. all is ok. I tried
 with https and it is not working. I compiled with the option
 enable-ssL ,and  with this version of squid it is not working ... I
 want squid be full transparent for certificate.
 
 no idea ... if someone can help me please
 
 'not working' is an awfully vague concept in the complex IT world of the
past millennium:

 Please clarify; items of interest:

- squid version?
- os/platform/version?
- which errors are seen, what was tried?
- cache.log entries, etc.
- and so on...

 M.


Re: [squid-users] Re: Reverse Proxy SSL + Certificates

2004-12-02 Thread Ow Mun Heng
On Thu, 2004-12-02 at 17:22, David Delamarre wrote:
 Hello
 
 i configured Squid as a reverse proxy for HTTP. all is ok. I tried
 with https and it is not working. I compiled with the option
 enable-ssL 
what does the log file say?

 ,and  with this version of squid it is not working ... I
Was it working with a previous version??

 want squid be full transparent for certificate.
 
 no idea ... if someone can help me please
 
 Thanks

--
Ow Mun Heng
Gentoo/Linux on D600 1.4Ghz 
Neuromancer 17:26:53 up 7:37, 7 users, 1.08, 0.61, 0.40 



[squid-users] Multicast Question

2004-12-02 Thread J Thomas Hancock
First off I would like to thank everyone for their input on my previous disk
configuration question.

I am in the process of setting up a small army of transparent caching
servers.  I am using Fedora Core 3 as the OS and Squid version 2.5.7
compiled from source.  The servers each have 2 NICs.  I plan on using NIC1
for clients and fetching the information from the internet.  I want to use
NIC 2 to share the cache with the other Squid servers.

I currently have a peer_cache setup that seems to be working.  I would like
to change it to a multicast group.

Is there a significant performance increase?  Instead of sending an ICP
query to each peer, I only send the packet out once, but I will still get
the same number of responses.

Will multicast scale better than just using a peer relationship?

Finally, can I use a private IP subnet like 192.168.1.0/24 for the second
NICs and the multicast address or must I use addresses in the multicast
space?
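
For what it's worth, a multicast ICP setup between siblings usually looks something like the sketch below (Squid 2.5 directives; the group address and peer name are illustrative). The ICP target itself has to be a real multicast group address (224.0.0.0/4), although the NICs carrying that traffic can of course sit on a private subnet:

# join this group so we receive the other caches' ICP queries
mcast_groups 239.128.16.128
# send our own ICP queries to the group (the HTTP port here is never used)
cache_peer 239.128.16.128 multicast 3128 3130 ttl=16
# accept ICP replies from each real sibling that answers the multicast query
cache_peer peer2.example.com sibling 3128 3130 multicast-responder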

Thank you,
Tom Hancock





Re: [squid-users] Multicast Question

2004-12-02 Thread Ow Mun Heng
On Thu, 2004-12-02 at 17:44, J Thomas Hancock wrote:
 First off I would like to thank everyone for their input on my previous disk
 configuration question.
 
 I am in the process of setting up a small army of transparent caching
 servers.  I am using Fedora Core 3 as the OS and Squid version 2.5.7
 compiled from source.  The servers each have 2 NICs.  I plan on using NIC1
 for clients and fetching the information from the internet.  I want to use
 NIC 2 to share the cache with the other Squid servers.
 
 I currently have a peer_cache setup that seems to be working.  I would like
 to change it to a multicast group.

I don't know. Checking the O'Reilly book, Squid: The Definitive Guide, the
author says multicast ICP is unreliable.

YMMV



[squid-users] external helper authorisation to a NT trusted domain

2004-12-02 Thread Grund, Andreas
using SAMBA 3.0.9 and SQUID 2.5.STABLE7

I have an authorisation problem using the external helper wbinfo_group.pl. We
have 2 trusted domains, DOM_A and DOM_B (NT4 domains). Authorisation against
DOM_A (the squid server is a member of DOM_A) works fine, but users belonging
to DOM_B couldn't be authorised. This happens because squid never sends a fully
qualified group name, and it seems that wbinfo_group.pl needs the fully
qualified name, otherwise it doesn't recognise domain groups in the trusted
domain. For example: 'userB', belonging to group 'grpB' in domain 'DOM_B',
tries to open a page. Now wbinfo_group.pl gets 'DOM_B+userB grpB' and sends
'ERR' to squid (could not lookup name). If the parameter were
'DOM_B+userB DOM_B+grpB', everything would be fine (at least judging by my
tests running wbinfo_group.pl directly from the shell).
Does anybody have an idea how to fix this problem? Maybe it is just a
configuration issue? Here are the relevant config lines:

smb.conf -

[global]
workgroup = dom_a
security = domain
password server = 192.168.1.2
wins support = yes 
max log size = 1
local master = no
winbind enum users = yes
winbind enum groups = yes
winbind use default domain = no 
idmap uid = 10000-20000
idmap gid = 10000-20000
winbind separator = + 
[..]

squid.conf -

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 3
auth_param ntlm max_challenge_reuses 1
auth_param ntlm max_challenge_lifetime 2 minute
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic realm Squid proxy-caching web server
auth_param basic children 3
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 minute

external_acl_type NT_global_group children=10 ttl=900 %LOGIN
/usr/local/squid/libexec/wbinfo_group.pl

refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern .           0       20%     4320

coredump_dir /usr/local/squid/var/cache

[..]
acl squid_user external NT_global_group squid_user
acl _grp_allowed_sites dstdomain /etc/squid/sites_auskunft

# squid_auskunftD1 is global group in DOM_A
acl _auskunftD1_user external NT_global_group squid_auskunftD1
# squid_auskunftD2 is global group in DOM_B
acl _auskunftD2_user external NT_global_group squid_auskunftD2

[..]
http_access allow _grp_allowed_sites _auskunftD1_user
http_access allow _grp_allowed_sites _auskunftD2_user
[..]

http_reply_access allow all
icp_access allow all
http_access deny all
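
One possible workaround, consistent with the observation that 'DOM_B+userB DOM_B+grpB' works when fed to the helper by hand, would be to put the fully qualified group name on the acl line itself, since whatever follows the acl name is passed to wbinfo_group.pl after %LOGIN. An untested sketch:

# squid would then send "DOM_B+userB DOM_B+squid_auskunftD2" to the helper
acl _auskunftD2_user external NT_global_group DOM_B+squid_auskunftD2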

Regards,
Andreas Grund


[squid-users] Re: AW: Re: URGENT: trouble connecting to certain sites ...

2004-12-02 Thread Evert Meulie
Additional info: most problems started when the uplink was switched to another
ISP, which requires me to use PPPoE. Could PPPoE (and MTU/MRU) have anything
to do with this...?
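
If the PPPoE/MTU suspicion turns out to be right, the usual mitigation on a Linux firewall/gateway that carries the PPPoE link is to clamp the TCP MSS there, so that full-size segments never hit the smaller PPPoE MTU. A sketch, assuming iptables and a ppp0 uplink on the firewall in front of Squid:

# rewrite the MSS in SYN packets being forwarded out over the PPPoE link
iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu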

Regards,
Evert
Evert Meulie wrote:
Strange... On the network with the problem-Squid I tried to connect to 
http://www.inpoc.no/ and checked the logs of Squid: nothing showing 
there. Then I checked the firewall: There I DO see traffic coming from 
www.inpoc.no and going to the Squid-machine...!   :-?


Evert Meulie wrote:
Hmm, it's getting weirder here... Another Squid server I have access 
to IS able to access the sites I mentioned... So somewhere in the 
first server (or perhaps the firewall?) there is something funky going 
on...
The complication is that I don't really see a pattern in the sites it 
is not able to access...   :-/

Well, I'll do some more bug-hunting and will keep you all informed on 
the outcome!  ;)

Regards,
   Evert

[EMAIL PROTECTED] wrote:
Works for me with squid.
Squid Cache: Version 2.5.STABLE4
configure options:  --enable-auth=ntlm,basic
--enable-external-acl-helpers=winbi
nd_group --enable-basic-auth-helpers=winbind
--enable-ntlm-auth-helpers=winbind
--prefix=/usr/local/squid --with-samba-sources=/usr/local/samba-2.2.5
access.log shows:
1101798525.950    968 10.23.6.231 TCP_MISS/302 545 GET http://www.chessworld.net/ GM127558 FIRST_UP_PARENT/fu0270.zff.zf-group.de text/html
1101798526.135    185 10.23.6.231 TCP_MISS/200 11757 GET http://www.chessworld.net/chessclubs/asplogin.asp GM127558 FIRST_UP_PARENT/fu0270.zff.zf-group.de text/html

Mit freundlichem Gruß/Yours sincerely
Werner Rost
GM-FIR - Netzwerk
 
ZF Boge Elastmetall GmbH
Friesdorfer Str. 175, 53175 Bonn, Deutschland/Germany
Telefon/Phone +49 228 3825 - 420
Telefax/Fax +49 228 3825 - 398
[EMAIL PROTECTED]
 

-----Original Message-----
From: Evert Meulie [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 30 November 2004 06:36
To: [EMAIL PROTECTED]
Subject: [squid-users] Re: URGENT: trouble connecting to certain sites...

Evert Meulie wrote:
Hi everyone!
The users of our system are unable to connect to certain sites lately,
when using Squid. With direct access it all runs like a charm.
Trouble-sites are, for example: http://www.wipred.com/
   http://www.nor.no/
   http://www.chessworld.net/
An attempted visit to http://www.nor.no/ generates the following in the
log:
1100731459.899163 192.168.24.24 TCP_MISS/302 443 GET 
http://www.nor.no/ - DIRECT/153.110.204.15 text/html

and that's it...
a visit to http://www.chessworld.net/ generates:
1100731984.998   2553 192.168.24.24 TCP_MISS/302 514 GET 
http://www.chessworld.net/ - DIRECT/217.199.179.4 text/html
1100731988.648   3650 192.168.24.24 TCP_MISS/302 480 GET 
http://www.chessworld.net/chessclubs/asplogin.asp - 
DIRECT/217.199.179.4
text/html
1100731995.255   6607 192.168.24.24 TCP_MISS/302 499 GET 
http://www.chessworld.net/chessclubs/verify.asp? - 
DIRECT/217.199.179.4 text/html
1100731996.824   1568 192.168.24.24 TCP_MISS/302 476 GET 
http://www.chessworld.net/chessclubs/playchess.asp? - 
DIRECT/217.199.179.4 text/html

But in neither case the user gets the actual site on his/her 
screen...   :-/


Update: my users also can not connect to http://www.inpoc.no/





[squid-users] squid to resolve local domain with out seeing /etc/hosts

2004-12-02 Thread Mrs. Geeta Thanu
Hi all,

I have installed squid and it has been working fine for many days.
Recently we shifted our web server to an IP, say 172.16.2.80.

But the system on which squid is running has in its /etc/hosts file

172.16.0.1 webserver

 I want this entry not to be changed, because of some
other purpose.

So now if anybody browses 'webserver' through squid, it points to 172.16.0.1
as the webserver.

How can I make squid not look into the /etc/hosts file and resolve
the local domains through DNS only?

Please guide me


regds
Geetha



RE: [squid-users] squid to resolve local domain with out seeing /etc/hosts

2004-12-02 Thread Elsen Marc

 
 
 Hi all,
 
 I have installed squid and is working fine for many days.
 Recently we shifted our web server to a IP say 172.16.2.80.
 
 but the system in which squid is running has in its /etc/hosts file
 
 172.16.0.1 webserver
 
  I want this entry not to be changed becos of some
 other purpose.
 
 so now if anybody browse webserver thru squid it is pointing 
 to 172.16.0.1
 as webserver.
 
 How can I make squid not to look in to /etc/hosts file and 
 should resolve
 the local domains also thru DNS only..
 

You can't, because squid is designed for just that:
to be able to resolve local names via /etc/hosts.
You are turning the world around, somewhat.

Make sure that host and/or DNS names are unique whether
defined in /etc/hosts or not.
Then your problem vanishes.

So the 'other purpose' should be implemented in a more
transparent way on your intranet.

M.


Re: [squid-users] Re: Reverse Proxy SSL + Certificates

2004-12-02 Thread David Delamarre
- squid version ?
- os/platform/version ?
- which errors are, seen , what was tried ?
- cache.log entries, e.d,
- and so on...
squid 2.5.7 + the latest openssl version.
It is the first time I have configured squid. I tried reverse HTTP and it is
working.

I tried reverse HTTPS
with this configuration:
https_port 443
httpd_accel_host  XXX
httpd_accel_port 443
httpd_accel_with_proxy on
httpd_accel_single_host off


 client ==https==> reverse squid ==https==> Server

With this configuration, squid is not working. I checked the firewall:
there is a connection between the client and squid, and no connection
between squid and the server to protect. I want squid to be completely
transparent for SSL (the certificate must be passed through to the server).
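
For what it's worth, my understanding of Squid 2.5 is that https_port always terminates the SSL session at the proxy, using a certificate of its own, and then talks plain HTTP to the accelerated host; it cannot hand the client's SSL session (or a client certificate) through to the backend untouched. A minimal https_port accelerator sketch, with illustrative paths and hostname:

https_port 443 cert=/usr/local/squid/etc/server.crt key=/usr/local/squid/etc/server.key
httpd_accel_host www.example.com
httpd_accel_port 80
httpd_accel_single_host on

If the backend must see the original SSL connection end to end, a TCP-level forwarder or a plain firewall rule is probably a better fit than Squid here.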

Thanks




On Thu, 2 Dec 2004 10:27:38 +0100, Elsen Marc [EMAIL PROTECTED] wrote:
 
 
 
 
 
  Hello
 
  i configured Squid as a reverse proxy for HTTP. all is ok. I tried
  with https and it is not working. I compiled with the option
  enable-ssL ,and  with this version of squid it is not working ... I
  want squid be full transparent for certificate.
 
  no idea ... if someone can help me please
 
 'not working' is an awfull concept in the complex IT area of
 the past Millenium ages :
 
 Clarify  , items interesting :
 
- squid version ?
- os/platform/version ?
- which errors are, seen , what was tried ?
- cache.log entries, e.d,
- and so on...
 
 M.



[squid-users] calamaris

2004-12-02 Thread Mateo Cabrera @ adinet.com.uy
Hi:

I need to make the following report with CALAMARIS 2.99 (beta):

outgoing requests by destination and per authenticated user.

Which command or syntax must I use?

Also, how do I make graphical reports? I know that this version supports
them.
Thanks in advance.

loop.-



RE: [squid-users] calamaris

2004-12-02 Thread Elsen Marc

 
 
 Hi:
 
 I need to make the following report with CALAMARIS 2.99 (beta):
 
 Outgoing request by destination and per user autenticated
 
 Which command or sintaxis i must to use?
 
 Other, how to make graphical reports?, I know that this 
 version support
 them.
 Thx...in advance.
 
 According to :

   http://cord.de/tools/squid/calamaris/Welcome.html.en

 only V3-beta supports graphics. (...)

 M.


Re: [squid-users] Squid limits and hardware spec

2004-12-02 Thread Martin Marji Cermak
Hello, Guys!
OK, now I have:
  maximum_object_size 200 MB
That means your cache will store up to 200MB of each file. 

You can even store ISO files if your users download Linux ISOs. Just
need to up that 200MB to say 800MB.
Yes, this would be nice if there were just one linux distro and so all 
Linux people would download just, say, 8 Linux images :-)

Setting this to 800 MB would only be interesting when you have, say, a cache 
bigger than 500 GB, at least. If I have a cache below 100 GB and I do not 
know much about the traffic my users download, the LFUDA algorithm does not 
let an ISO stay in the cache for too long :-)

You might also want to change your L1 directories; for a 90GB cache,
only having 16 L1 directories may not be enough.
Thank you, you are right.
At the moment I have
  cache_dir aufs /cache 92000 16 256
because I have 3 x 36GB SCSI disks behind a RAID0 (stripe) controller
(they appear as just one disk).
Yes, I know, RAID is bad, but a RAID0 controller is the only controller
I have had until today :-)
So I am going to connect these 3 disks to the new controller (no RAID)
tomorrow with these settings:

  cache_dir aufs /cache1 30000 60 256
  cache_dir aufs /cache2 30000 60 256
  cache_dir aufs /cache3 30000 60 256
(30000 is roughly 80% of the 36 GB disk, so I am right, right?)
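
The arithmetic behind that 80% figure, for reference (approximate; cache_dir sizes are in MB):

# 36 GB ~= 36864 MB; 80% of that ~= 29500 MB, so 30000 MB per disk is in the
# right ballpark, leaving a few GB of headroom for filesystem overhead
cache_dir aufs /cache1 30000 60 256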
I am just sorry about my current, full 92 GB cache: when I remove the
three disks from the stripe, I will have to reformat them and start with
empty caches (it took more than 3 days to fill up this 92 GB cache).

The only way to save some of the cached data is:
- change my current cache_dir aufs /cache 92000 16 256
  to cache_dir aufs /cache 30000 16 256
  and start Squid; it removes 62 GB from the cache.
- switch Squid off and copy the entire cache to a temporary disk (yes, it
  will take hours, I already tried; probably better to use tar, without
  compression).
- change the controller, format the three SCSI disks, mount them, and copy
  and untar the backed-up cache to one of them.
- change the configuration to 3x cache_dir and initialize with 'squid -z'.
- start Squid and ta-da, I have 30 GB of cached data.
Ow, thank you for the L1 algorithm.

Just out of curiosity, what is your cache's filesystem? Ext3? reiserfs?
I had reiserfs (with noatime) but it seemed too slow. I changed it to
ext3 (noatime), which was supposed to be quicker according to the
"Squid: The Definitive Guide" book; there are benchmarks in it and ext3 has
much better throughput.
Finally, I decided my Squid box is going to be stable (good hardware,
UPS) and settled on ext2 with noatime.

Do you expect to have more _large_ files or more small files? I use
reiserfs (anticipating more small cached files).
I do not know. I will have to get some stats somehow. Is this info
stored somewhere in the Squid cachemgr info pages by any chance?
Oh, sorry, you mentioned I can do it by querying the cache. I will have
a look at it (and post it here with other conclusions :-)


cache_mem 200 MB
How much memory do you have??
For a 90GB cache, and assuming 10MB of RAM per 1GB of cache, you'd better
have something like 900MB of RAM.
1 GB. I am fine, as you can see from my top:
 14:11:52 up 1 day,  5:35,  1 user,  load average: 2.04, 2.17, 2.25
45 processes: 41 sleeping, 4 running, 0 zombie, 0 stopped
CPU states:  25.9% user,  74.1% system,   0.0% nice,   0.0% idle
Mem:   1551284K total,  1520332K used,30952K free,   109912K buffers
Swap:   979956K total, 6200K used,   973756K free,   478308K cached
  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
13742 root  15   0  689M 687M  4020 R    90.0 45.4 377:58 squid
15497 root  16   0   940  940   748 R 8.4  0.0   0:02 top
13754 root   9   0  689M 687M  4020 S 0.4 45.4   0:20 squid

So I am
supposed to have my Squid in good shape :-), stable and running
without stopping/crashing.
The thousands means approx. 3500 users at the moment.
OK.. and they're all accessing 1 cache? Wow.
Yes, but they are not active at the same time. My peaks are:
200 client HTTP requests/sec. Server in: 1.6MB/sec. Client out: 2MB/sec
Have a nice day,
If you post back the results, I sure will.
So have it. I will (post them :-)
Marji