Re: [squid-users] logrotate in squid

2009-10-06 Thread espoire20



espoire20 wrote:
 
 
 
 espoire20 wrote:
 
 I use sarg to generate the reports, but my disk is filling up, so I
 need to set up logrotate. That way it can rotate my logs every day and
 automatically delete the oldest ones. I should configure it to rotate
 my squid logs every week and keep them for more than four weeks, since
 I want to be able to produce monthly reports. Of course I will adjust
 this according to the traffic through my proxy, so that I keep some
 free disk space on my server.
 
 Please, do you know how I can create the logrotate config?
 
 many thanks 
 
 
 Thank you. Please, could you give me more detail, because I am not
 strong in squid.
 
 This is my crontab:
 
 # run-parts
 01 * * * * root run-parts /etc/cron.hourly
 02 4 * * * root run-parts /etc/cron.daily
 22 4 * * 0 root run-parts /etc/cron.weekly
 42 4 1 * * root run-parts /etc/cron.monthly
 
 many thanks 
 

Can you give me more details please? I'm blocked.
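
The rotation scheme described above can be sketched as a logrotate config. This is only a sketch: the log path, the squid binary location, and "rotate 5" (keeping enough weeks for a monthly sarg run) are assumptions, and it presumes logfile_rotate is set to 0 in squid.conf so that "squid -k rotate" merely reopens the log files after logrotate has renamed them:

```
# /etc/logrotate.d/squid (hypothetical path)
/var/log/squid/*.log {
    weekly
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```

The cron.daily entry in the crontab above will then pick this file up automatically.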
-- 
View this message in context: 
http://www.nabble.com/logrotate-in-squid-tp25728886p25765349.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] squid_ldap_group concurrency

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 00:07 +0200 skrev vincent.blon...@ing.be:
 Hello all,
 
 has somebody already got some experience with squid_ldap_group on squid
 2.7.X? I am trying to find some info on what reasonable value I can
 define for concurrency

None. This helper only supports the non-concurrent helper protocol.

  and if concurrency can also be used with children

It can, but for helpers that do not support concurrency, children is the
only parameter you can tune. This applies to most helpers, as the
concurrent helper protocol is relatively new.
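
As a hypothetical illustration of Henrik's point, tuning would then happen on the children= option of the external_acl_type line rather than concurrency= (the helper path, base DN, and search filter below are placeholders, not taken from the thread):

```
external_acl_type ldapgroup children=15 %LOGIN /usr/lib/squid/squid_ldap_group -b "dc=example,dc=com" -h ldap.example.com -f "(&(uid=%v)(memberOf=cn=%a,ou=groups,dc=example,dc=com))"
acl internet external ldapgroup InternetUsers
```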

Regards
Henrik



Re: [squid-users] squid counters appear to be wrapping on squid v2.6.18 (old I know)

2009-10-06 Thread Henrik Nordstrom
mån 2009-10-05 klockan 17:09 +0100 skrev Gavin McCullagh:

 we're seeing something odd on squid v2.6.18-1ubuntu3.  I know this is an
 old version and not recommended but I just thought I'd point it out to make
 sure this has been fixed in a more recent version.
 
 After some time running, a couple of squid's counters appear to be
 wrapping, like signed 32-bit integers.  In particular:
 
   client_http.kbytes_out = -2112947050
 
 We noticed this as we use munin, which queries the counters in this way and
 ignores negative values.  The select_loops value is also negative.  If this
 is fixed in v2.7 that's fair enough but I thought I'd mention it here in
 case it isn't.

Some or most of these have been fixed, at least for the SNMP interface.

Regards
Henrik



[squid-users] Max connections

2009-10-06 Thread Sergio Belkin
Hi,

squid.conf says about maxconn:
This will be matched when the client's IP address has
 more than number HTTP connections established.

OK, that's if we have only one IP we want to limit.

What if I have ACLs like this:

acl max_conn_vlan2 maxconn 100

acl vlan2   src   192.168.139.128/255.255.255.128


And then:

http_access deny vlan2 max_conn_vlan2

Does this limit each IP of the range to 100 connections, or is the
whole range limited to 100?

Thanks in advance!

-- 
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


[squid-users] Squid ftp authentication popup

2009-10-06 Thread uxmax
Hello all,

my problem is that squid does not send the auth dialog box back to the client
(sender/browser): IE*, Firefox, etc.

simple example 

http://upload.ftpserver.com  (auth popup appears)


ftp://upload.ftpserver.com (popup does not appear)


squid version = 2.6 STABLE21 (OS: CentOS 5); same problem with the Windows
port of squid 2.6.



Should I try a new version like 2.7 or 3.*?


thank you, uxmax


[squid-users] UPDATE: Squid ftp authentication popup

2009-10-06 Thread uxmax


Hello all,

my problem is that squid does not send the auth dialog box back to the client
(sender/browser): IE*, Firefox, etc.

simple example 

http://upload.ftpserver.com  (auth popup appears)

ftp://upload.ftpserver.com (popup does not appear)

the authentication popup does not appear, only this error message; most likely
the browser tried an anonymous connection:
---snip---snip---snip---snip---snip---snip---snip---snip
(ERROR
The requested URL could not be retrieved



An FTP authentication failure occurred while trying to retrieve the URL: 
ftp://upload.dunkel.de/ 

Squid sent the following FTP command:

PASS yourpassword

and then received this reply:

Login incorrect.

Your cache administrator is root.
---snip---snip---snip---snip---snip---snip---snip---snip


squid version = 2.6 STABLE21 (OS: CentOS 5); same problem with the Windows
port of squid 2.6.

Is this a confirmed bug? Should I try a new version like 2.7 or 3.*?


thank you, uxmax



[squid-users] Stop account sharing?

2009-10-06 Thread skinnyzaz

Is there a way to stop 2 users from using the same account at the same time
while behind the same router? So to clarify, they will both have the same
external IP address. I am using AD to authenticate but am going to be
switching over to openldap soon.
-- 
View this message in context: 
http://www.nabble.com/Stop-account-sharing--tp25773504p25773504.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] New Admin

2009-10-06 Thread Ross Kovelman
Hello,

I am setting up squid for the 1st time and so far so good.  I am trying to
deny access to some sites like myspace, facebook, etc.  My config for this
is as follows:

acl bad_url dstdomain /xxx//etc/bad-sites.squid
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl workdays time M T W H F 8:30-18:00
http_access allow workdays
http_access allow all
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT


In that file (bad-sites.squid) I then have sites listed as such:

.facebook.com
.fanfiction.net
.meebo.com
.myspace.com
.playboy.com



For some odd reason it is not blocking them.  Any help would be appreciated!



smime.p7s
Description: S/MIME cryptographic signature


[squid-users] My sarg broke

2009-10-06 Thread ant2ne

I like Webmin and Sarg. Something recently has broken it. I don't care if I
lose all the old logs, but I need to generate new ones.

Here is the error I get
Now generating Sarg report from Squid log file /var/log/squid/access.log and
all rotated versions ..
sarg -l /var/log/squid/access.log.1 -d 05/10/2009-06/10/2009
SARG: No records found
SARG: End
sarg -l /var/log/squid/access.log.2.gz -d 05/10/2009-06/10/2009
SARG: Decompressing log file: /var/log/squid/access.log.2.gz 
/tmp/sarg-file.in (zcat)
SARG: No records found
SARG: End
.. Sarg finished, but no report was generated. See the output above for
details.

Here is some Back Story, along with my current squid.conf
http://www.nabble.com/not-caching-enough-td25530445.html#a25553196
-- 
View this message in context: 
http://www.nabble.com/My-sarg-broke-tp25775972p25775972.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Max connections

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 10:46 -0300 skrev Sergio Belkin:

 Does this limit each IP of the range up to 100 connections or the
 whole range is limited up to 100?

Each.

Regards
Henrik



Re: [squid-users] Squid ftp authentication popup

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 16:20 +0200 skrev ux...@enquid.net:
 Hello all,
 
 my problem is that squid does not send auth dialog box back to the client 
 (sender/browser) ie*, firefox etc.
 
 simple example 
 
 http://upload.ftpserver.com  (auth popup appears)
 
 
 ftp://upload.ftpserver.com (popup does not appear)

That URL is by definition anonymous FTP.

To access non-anonymous FTP you need to use

 ftp://user:passw...@ftp.example.com/

With some browsers you can leave out the :password part and Squid will
prompt the browser for login credentials, but some of the main browsers
do not support this.

Regards
Henrik





Re: [squid-users] My sarg broke

2009-10-06 Thread Augusto Casagrande
Hi,
 apparently it's a Webmin error. I have a similar problem with Sarg 2.2.5
on Squid 3.0.STABLE19 and Webmin 1.490.
I've reported the error on a Webmin forum, and it is probably a Webmin
bug. It will be fixed in the next (1.500) release.

Augusto

2009/10/6 ant2ne tcy...@altonschools.org:

 I like Webmin and Sarg. Something recently has broken it. I don't care if I
 loose all old logs, but I need to generate new ones.

 Here is the error I get
 Now generating Sarg report from Squid log file /var/log/squid/access.log and
 all rotated versions ..
 sarg -l /var/log/squid/access.log.1 -d 05/10/2009-06/10/2009
 SARG: No records found
 SARG: End
 sarg -l /var/log/squid/access.log.2.gz -d 05/10/2009-06/10/2009
 SARG: Decompressing log file: /var/log/squid/access.log.2.gz 
 /tmp/sarg-file.in (zcat)
 SARG: No records found
 SARG: End
 .. Sarg finished, but no report was generated. See the output above for
 details.

 Here is some Back Story, along with my current squid.conf
 http://www.nabble.com/not-caching-enough-td25530445.html#a25553196
 --
 View this message in context: 
 http://www.nabble.com/My-sarg-broke-tp25775972p25775972.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Stop account sharing?

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 11:02 -0700 skrev skinnyzaz:
 Is there a way to stop 2 users from using the same account at the same time
 while behinde the same router? So to clarify, they will both have the same
 external IP address. I am using AD to authenticate but am going to be
 switching over to openldap soon.

Quite hard when they are also sharing the same IP.

You can do things like restricting each account to a single user-agent
string, but that's quite evil and does break many cases of perfectly valid
use (i.e. running both MSIE and Firefox, various automatic software update
agents, etc etc...)
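
The user-agent restriction Henrik mentions could be sketched like this; it is deliberately crude, the pattern is just an example, and "browser" is squid's regex acl on the User-Agent header:

```
# Only allow requests whose User-Agent matches one pinned signature.
acl pinned_agent browser Firefox/3\.5
http_access deny !pinned_agent
```

As noted above, this breaks update agents and any second browser, so treat it as a last resort.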

Regards
Henrik



Re: [squid-users] New Admin

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 14:53 -0400 skrev Ross Kovelman:

 
 For some odd reason it is not blocking them.  Any help would be appreciated!

You need to deny access based on that acl in http_access, somewhere
before where you allow access.

Regards
Henrik



Re: [squid-users] Yahoo doesn't load properly through squid

2009-10-06 Thread Matus UHLAR - fantomas
On 29.09.09 18:01, Avinash Rao wrote:
 I am using squid 2.6 stable18 on Ubuntu 8.04 server - 64-bit edition.
 I noticed a strange problem with squid recently, yahoo.com page
 doesn't load completely through squid proxy.

 acl videos dstdomain .youtube.com .yimg.com .orkut.com .sex.com
 .teen.com .adult.com .mp3.com

 http_access deny videos

Why do you wonder? Many, if not most, of Yahoo's images are hosted on
yimg.com, which your videos acl denies...
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Depression is merely anger without enthusiasm. 


Re: [squid-users] What does --enable-ntlm-fail-open do?

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 12:38 +1100 skrev Daniel Rose:

 I've been hunting, but I can't find any extra info on the  
 --enable-ntlm-fail-open configure argument.

Checking...

It's not really in use today. That parameter changes the old smb_ntlm
helper to indicate success instead of failure if it for some reason
fails to contact the configured authentication server.

But that helper is not recommended for use, as it's very unstable to
begin with, and the method it uses for verifying the user credentials
does not scale well.

 What needs to be setup in the squid.conf to enable this behaviour?

Nothing.

Regards
Henrik



[squid-users] Uncacheable object in squid

2009-10-06 Thread Hardeep Uppal

I am using squid 2.6-STABLE22 on Fedora 8 and I am trying to find objects
that are uncacheable in squid. Is there a way to make the access.log file
give information about objects that squid will not cache? I need to find the
number of uncacheable objects (dynamic, https, no-cache) which cause a
TCP_MISS.

  
_
Hotmail: Trusted email with powerful SPAM protection.
http://clk.atdmt.com/GBL/go/177141665/direct/01/

Re: [squid-users] New Admin

2009-10-06 Thread Ross Kovelman
 From: Henrik Nordstrom hen...@henriknordstrom.net
 Date: Tue, 06 Oct 2009 22:46:25 +0200
 To: Ross Kovelman rkovel...@gruskingroup.com
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] New Admin
 
 tis 2009-10-06 klockan 14:53 -0400 skrev Ross Kovelman:
 
 
 For some odd reason it is not blocking them.  Any help would be appreciated!
 
 You need to deny access based on that acl in http_access, somewhere
 before where you allow access.
 
 Regards
 Henrik
 

This is what I have for http_access:

http_access deny bad_url
http_access deny all bad_url
http_access deny manager
http_access allow manager localhost
http_access allow workdays
http_access allow our_networks


I would think bad_url would do the trick since I have acl bad_url dstdomain,
correct?


Thanks


smime.p7s
Description: S/MIME cryptographic signature


Re: [squid-users] Squid ftp authentication popup

2009-10-06 Thread Amos Jeffries
On Tue, 06 Oct 2009 22:43:43 +0200, Henrik Nordstrom
hen...@henriknordstrom.net wrote:
 tis 2009-10-06 klockan 16:20 +0200 skrev ux...@enquid.net:
 Hello all,
 
 my problem is that squid does not send auth dialog box back to the
client
 (sender/browser) ie*, firefox etc.
 
 simple example 
 
 http://upload.ftpserver.com  (auth popup appears)
 
 
 ftp://upload.ftpserver.com (popup does not appear)
 
 That URL is by definition anonymous FTP.
 
 To access non-anonymous FTP you need to use
 
  ftp://user:passw...@ftp.example.com/
 
 With some browsers you can leave out the :password part and Squid will
 prompt the browser for login credentials, but some of the main browsers
 do not support this.
 
 Regards
 Henrik

Firefox 3.x will happily pop up the ftp:// auth dialog if the proxy-auth
header is sent.
There were a few bugs which got fixed in the 3.1 re-writes and made squid
start to send it properly. It's broken in 3.0; not sure if it's the same in
2.x, but I would assume so. The fixes rely on C++ objects, so they won't be
easy to port.

Amos



Re: [squid-users] New Admin

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 16:55 -0400 skrev Ross Kovelman:

 This is what I have for http_access:
 
 http_access deny bad_url
 http_access deny all bad_url
 http_access deny manager
 http_access allow manager localhost
 http_access allow workdays
 http_access allow our_networks
 
 
 I would think bad_url would do the trick since I have acl bad_url dstdomain,
 correct?

It should. At least assuming you have no other http_access rules above
this.

But the rest of those rules look strange.

I think you want something like:

# Restrict cachemgr access
http_access allow manager localhost
http_access deny manager

# Block access to banned URLs
http_access deny bad_url

# Allow users access on workdays
http_access allow our_networks workdays

# Deny everything else
http_access deny all


but I have no description of what effect workdays is supposed to have...


Regards
Henrik




Re: [squid-users] Uncacheable object in squid

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 20:52 + skrev Hardeep Uppal:
 I am using squid2.6-Stable22 on fedora 8 and i am trying to find
 objects that are uncacheable in squid. Is there a way to make
 access.log file give information about objects that squid will not cache. I 
 need to find number of
 uncacheable object (dynamic, https, no-cache) which cause TCP_MISS.

store.log is probably a better place to look. Objects there with a 200
status code and  as file number are likely candidates..

There may be other reasons why an object does not get cached, such as
temporary failures, aborted requests etc., but this will at least narrow
down your search considerably.
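
Henrik's store.log hint can be turned into a quick filter. The field layout assumed below (time, action, file number, status, method, URL) and the use of FFFFFFFF as the "not stored" file number are assumptions about the 2.6 store.log format; the sample data is invented so the sketch runs standalone:

```shell
# Fake store.log sample to demonstrate the filter; a real log would be
# read from /var/log/squid/store.log instead.
cat > /tmp/store.sample <<'EOF'
1254800000.123 SWAPOUT 0000A1B2 200 GET http://example.com/cached.gif
1254800001.456 RELEASE FFFFFFFF 200 GET http://example.com/dynamic.cgi
1254800002.789 RELEASE FFFFFFFF 302 GET http://example.com/redir
EOF
# RELEASE with file number FFFFFFFF and HTTP 200: served OK, never stored.
count=$(awk '$2 == "RELEASE" && $3 == "FFFFFFFF" && $4 == 200' /tmp/store.sample | wc -l | tr -d ' ')
echo "$count uncacheable candidate(s)"
```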

Regards
Henrik




Re: [squid-users] Https traffic

2009-10-06 Thread Henrik Nordstrom
mån 2009-10-05 klockan 11:34 +0200 skrev ivan.ga...@aciglobal.it:
 Hi, 
 my company is going to buy the Websense web security suite. 
 It seems to be able to decrypt and check contents in the ssl tunnel. 
 Is it really important to do this to prevent malicious code or dangerous 
 threats?

Any product doing this will require full administrative control over the
clients, as already explained in this thread. This is required to crack
the SSL security layer wide open to the proxy.

Why one wants to do this varies a lot, but most reasons that come to
mind here are not about filtering or malicious code... more in the area
of being able to inspect what leaves the company, reliable audit trails
of who did what, etc.

Anyway, keep in mind that cracking SSL like this is not without effects,
for example many serious online banking solutions will fail miserably if
subjected to this simply because the connection can no longer provide
the required SSL end-to-end security features.

Regards
Henrik



Re: [squid-users] New Admin

2009-10-06 Thread Ross Kovelman
Thanks, I made those changes although still no luck.  I save the changes
and then run ./squid -k reconfigure; I'm not sure if I should run a
different command.

I do have this for work days:
acl workdays time M T W H F 8:30-18:00

If I can, I would like to deny those sites during workdays, and then it's
open before or after that time.

Thanks


 From: Henrik Nordstrom hen...@henriknordstrom.net
 Date: Tue, 06 Oct 2009 23:29:02 +0200
 To: Ross Kovelman rkovel...@gruskingroup.com
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] New Admin
 
 tis 2009-10-06 klockan 16:55 -0400 skrev Ross Kovelman:
 
 This is what I have for http_access:
 
 http_access deny bad_url
 http_access deny all bad_url
 http_access deny manager
 http_access allow manager localhost
 http_access allow workdays
 http_access allow our_networks
 
 
 I would think bad_url would do the trick since I have acl bad_url dstdomain,
 correct?
 
 It should. At least assuming you have not other http_access rules above
 this.
 
 but the rest of those rules looks strange.
 
 I think you want something like:
 
 # Restrict cachemgr access
 http_access allow manager localhost
 http_access deny manager
 
 # Block access to banned URLs
 http_access deny bad_url
 
 # Allow users access on workdays
 http_access allow our_networks workdays
 
 # Deny everything else
 http_access deny all
 
 
 but have no description of what effect workdays is supposed to have...
 
 
 Regards
 Henrik
 
 


smime.p7s
Description: S/MIME cryptographic signature


Re: [squid-users] New Admin

2009-10-06 Thread Ross Kovelman

 From: Henrik Nordstrom hen...@henriknordstrom.net
 Date: Tue, 06 Oct 2009 23:29:02 +0200
 To: Ross Kovelman rkovel...@gruskingroup.com
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] New Admin
 
 tis 2009-10-06 klockan 16:55 -0400 skrev Ross Kovelman:
 
 This is what I have for http_access:
 
 http_access deny bad_url
 http_access deny all bad_url
 http_access deny manager
 http_access allow manager localhost
 http_access allow workdays
 http_access allow our_networks
 
 
 I would think bad_url would do the trick since I have acl bad_url dstdomain,
 correct?
 
 It should. At least assuming you have not other http_access rules above
 this.
 
 but the rest of those rules looks strange.
 
 I think you want something like:
 
 # Restrict cachemgr access
 http_access allow manager localhost
 http_access deny manager
 
 # Block access to banned URLs
 http_access deny bad_url
 
 # Allow users access on workdays
 http_access allow our_networks workdays
 
 # Deny everything else
 http_access deny all
 
 
 but have no description of what effect workdays is supposed to have...
 
 
 Regards
 Henrik
 
 


I made a few changes and still nothing:

acl bad_url dstdomain /xxx//etc/bad-sites.squid
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl our_networks src 192.168.16.0/255.255.255.0
acl to_localhost dst 127.0.0.0/8
acl workdays time M T W H F 8:30-12:00 11:30-18:00
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Restrict cachemgr access
http_access allow manager localhost
http_access deny manager

# Block access to banned URLs
http_access deny bad_url workdays

# Allow users access on workdays
http_access allow our_networks workdays

# Deny everything else
http_access deny all

I would think this would fulfill the request I just emailed to the group,
but it doesn't.
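
For what it is worth, the stated goal (banned sites blocked during work hours, everything open outside them) needs the time acl only on the deny rule. A sketch of one such ordering (an illustration, not taken from the thread):

```
# Block banned sites only during working hours
http_access deny bad_url workdays
# Allow the local network at any time of day
http_access allow our_networks
# Deny everything else
http_access deny all
```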



 Thanks, I made those changes although still no luck.  I do save the
changes
and then run a ./squid -k reconfigure, not sure if I should run a different
command.  

I do have this for work days:
acl workdays time M T W H F 8:30-18:00

If I can I would like to deny those sites during workdays and then its
open before or after that time.

Thanks


smime.p7s
Description: S/MIME cryptographic signature


Re: [squid-users] Re: Reverse Proxy, sporadic TCP_MISS

2009-10-06 Thread Henrik Nordstrom
mån 2009-10-05 klockan 08:10 -0700 skrev tookers:

 Hi Henrik,
 Thanks for your reply. I'm getting TCP_MISS/200 for these particular
 requests so the file exists on the back-end,

Are you positively sure you got that on the first one? Not easy to tell
unless there is absolutely no other status code reported in the logs for
that URL. The access.log entry for the first may well be long after the
crowd.

 Squid seems unable to store the
 object in cache (quite possible due to a lack of free fd's), or possibly due
 to the high traffic volume.

Yes, both may cause Squid to not cache an object on disk. cache.log
should give indications if fd's are the problem, and also in most cases
when I/O load is the issue.

But neither lack of fds nor high I/O load prevents Squid from handling
the object as cachable. That is not dependent on being able to store the
object on disk. But maybe there are glitches in the logic there... how
collapsed_forwarding behaves on cachable objects which Squid decides
should not be cached for some reason has not been tested.

 Is there any way to control the 'storm' of requests? I.e. Possibly force the
 object to cache (regardless of pragma:no-cache etc) or have some sort of
 timer / sleeper function to allow only a small number of requests, for a
 particular request, to goto the backend?

It's a tricky balance trying to address the problem from that angle.

Forcing caching of otherwise uncachable objects is one thing. This you
can do via refresh_pattern based overrides, but the best way is to convince
the server to properly say that the object may be cached... but from
your description that's not the problem here.
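
A refresh_pattern override of the kind referred to above might look like the fragment below; the pattern and timings are invented examples, and which override options exist depends on the Squid version:

```
# Treat matching objects as cachable despite unhelpful server headers
refresh_pattern -i \.iso$ 1440 80% 10080 override-expire ignore-reload
```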

The problem with timer/sleeper is that you then may end up with a very
long work queue of pending clients if the object never becomes cachable.

Regards
Henrik



[squid-users] Odd behaviour

2009-10-06 Thread Luis Daniel Lucio Quiroz
Hi Squids,

Using Squid 3.0.19 we're having problems adding one more ACL.  Our
configuration has about 2000 http_access lines and about 2500 acls.

Now, when adding one more acl, or even modifying a file pointed to by an acl
such as acl myacl dstregexp myfile, our squid slows down too much.

Symptoms:
- squid -k parse: OK
- squid -k reconfigure: squid slows. cache.log says squid is reloading, but
it is too slow; the squid process begins to use about 99% of CPU. No dying
message in the log.

I would like to know if there is a maximum number of squid acls, http_access
lines, or regexps.

TIA
LD


Re: [squid-users] Querying cache

2009-10-06 Thread Miguel Cruz
Thanks for the info.  I've found some info on the net about setting up
squid for snmp but it seems everything I've found is outdated.

If anyone can point me to a good resource to read about this.  I've
read about using the mib.txt but still not sure how to query it.

I've created acls on squid .conf:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl CONNECT method CONNECT
acl snmpaccess snmp_community MYCOMMUNITYSTRING
 later on...
snmp_access allow snmpaccess localhost

I think the mib.txt file can give me what I'm looking for, after seeing
this part of it:

cacheNumObjCount OBJECT-TYPE
SYNTAX Gauge32
MAX-ACCESS read-only
STATUS  current
DESCRIPTION
 Number of objects stored by the cache 
::= { cacheSysPerf 7 }

To explain my setup better: I have (let's say) 20 squid servers and I want
to be able to query them, from one server that sits on the same network, to
see how many files they have in their cache (since they are caching the
content of an http server that serves plain text files).

Hope this helps
Thanks



On Mon, Oct 5, 2009 at 6:06 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Mon, 5 Oct 2009 16:33:10 -0400, Miguel Cruz toky.c...@gmail.com wrote:
 Hello all,

 I would like to know if there is a way to query squid for the total
 amount of files that it has in its cache.

 Reason is we are using squid in http_accell mode and if I do a wget on
 / I can get a listing of all the files that reside on the docroot
 and all the directories that are there but not of the files inside the
 directories.  So if I was to massage the index.html to clear the
 http tags I could get a list that I can count and use to do another
 wget into the directories I find but this seems over-engineered.
 Instead of getting the required data in 1 connection I would have to
 connect multiple times.

 This is part of my squid.conf file:

 httpd_accel_host 10.x.x.x
 httpd_accel_port 80
 httpd_accel_single_host on
 httpd_accel_with_proxy off
 httpd_accel_uses_host_header off

 Thanks in advanced
 Miguel

 To get the number of files in squid's cache, use SNMP or the cachemgr
 interface (squidclient mgr:info or cachemgr.cgi).

 However the problem you describe with / URL providing a list of files is
 not related to Squid at all. / URL is requested from the web server like
 any other. It's the same URL browsers fetch when only a domain name is
 entered in the address bar.

 The results you get back are created by the web server. You describe a
 directory listing files stored on the web server. It might be that none of
 the files listed there are cacheable and stored in squid at all.


 Please seriously consider upgrading your squid though. You will find any
 release 2.6 or higher to be much better for reverse-proxy usage. In speed,
 capability, and ease of configuration.


 Amos



Re: [squid-users] Odd behaviour

2009-10-06 Thread Amos Jeffries
On Tue, 6 Oct 2009 17:57:24 -0500, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
 Hi Squids,
 
 Using Squid 3.0.19 we're having problems adding one more ACL.  Our 
 configuration has about 2000 http_access and about 2500 acl's.  
 
 Now, adding one more acl, or even modifying a file pointed by an acl such
 as 
 acl myacl dstregexp myfile, our squid is slowing to much.  
 
 Symtopms:
 - squid -k pharse, OK
 - squid -k reconfigure, squid slows.  cache.log says squid is reloading
but
 it 
 is too slow, squid process begins to uses about 99% of cpu. No dying
 message 
 at log.
 
 I wonder to know if there is a maximun in squid acl, https, regexp.

No defined limits as such. It's just long lists/trees that need to be
walked over individually on each use and built on reconfigure.

The http_access list gets walked once per request. Each ACL (mostly trees,
some lists) gets walked once per test. How fast or slow it is really depends
on what types of ACL you use and in what order you place them in http_access.

Why do you have so many?

Amos



Re: [squid-users] Querying cache

2009-10-06 Thread Henrik Nordstrom
tis 2009-10-06 klockan 19:32 -0400 skrev Miguel Cruz:
 Thanks for the info.  I've found some info on the net about setting up
 squid for snmp but it seems everything I've found is outdated.

Not much has changed.

The Squid SNMP agent runs on the snmp_port configured in squid.conf and
supports SNMP v1 & v2c. SNMPv3 is not understood.

mib.txt is the Squid MIB. Install this as SQUID_MIB in your SNMP MIB
repository for ease of access, or give it to your snmp commands as the
MIB to use by path to mib.txt.

For access to be allowed, snmp_access needs to evaluate to allow. In
snmp_access you MAY use the snmp_community or src type acls, and maybe
some others as well, like myip.

example using net-snmp:

   snmpget -v 2c -c public -m share/mib.txt localhost:3401 cacheNumObjCount
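
For the twenty-server setup described earlier in the thread, the same query can simply be looped over each host. The hostnames are placeholders, and the commands are only echoed here so the sketch runs without net-snmp installed; drop the echo to actually query:

```shell
# Build one snmpget command per proxy (3401 is the example snmp_port).
cmds=$(for host in proxy1 proxy2 proxy3; do
  echo "snmpget -v 2c -c public -m share/mib.txt $host:3401 cacheNumObjCount"
done)
printf '%s\n' "$cmds"
```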

Regards
Henrik



RE: [squid-users] Squid/LDAP re-challenges browser on http_access deny

2009-10-06 Thread Dion Beauglehall
Hi,

I am now having issues with custom error pages,

I have the deny_info line for the accessdenied acl, but it isn't getting used
(I assume because the access deny line finishes with all). E.g.:

deny_info ERR_ACCESS_DENIED_MISUSE accessdenied
http_access deny accessdenied all

I have tried removing the all, but that puts me back into a re-challenge loop 
(which is why all was included).

I am hoping to have a list of denied messages which give instructions to the
user on the steps required to fix the issue, depending on the reason they
were denied.  Are there any suggestions someone can offer, or are there
relevant variables (e.g. the acl which denied them) which can be passed to an
external handler?  I'd rather do it with static ERR pages, but whatever works!

Regards,
Dion




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, 14 September 2009 12:20 PM
To: Dion Beauglehall
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Squid/LDAP re-challenges browser on http_access deny

On Mon, 14 Sep 2009 12:12:27 +1000, Dion Beauglehall
beaugleha...@vermontsc.vic.edu.au wrote:
 Hi Amos,
 
 The changes you suggested worked perfectly.  Thankyou.  What I'm not
quite
 sure of is why.  I assume in this context, the all at the end of the
line
 is not acting as a user list, but a URL list or something else?

It's an IP-based test doing a very fast catch-all.  This changes the type
of ACL last seen at denial, so Squid does not equate the deny with unusable
credentials and re-challenge.

Amos

 
 Regards,
 Dion
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Thursday, 10 September 2009 11:30 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid/LDAP re-challenges browser on
http_access
 deny
 
 On Thu, 10 Sep 2009 10:55:58 +1000, Dion Beauglehall
 beaugleha...@vermontsc.vic.edu.au wrote:
 Hi,
 
 I'm configuring a squid proxy box with LDAP authentication, and ACLs based
 on LDAP groups.  I have the LDAP authentication working, as are groups.
 
 However, when I add a user to an "Access Denied" group, squid then causes
 the browser to bring up an authentication dialog box.  Most squid installs
 I have seen bring up a squid "Cache Access Denied" screen at this point.
 This is what I would like it to do.
 
 I am unsure if what I am experiencing is expected behaviour, or whether I
 have an error in my config file.
 
 I am running Squid 2.7STABLE6 on a Windows 2008 server.  Relevant lines
 from squid.conf are below.  Note that the LDAP works correctly, and so I
 have not provided details.  What is not acting as I expected is the
 behaviour of Squid when it hits the “http_access deny accessdenied”
line.
 
 This seems to be what re-challenges the browser.  
 
 As we are a school, we need to ensure that both the user is a valid user
 (from the initial challenge, which collects their machine login,
 invisible
 to the user), and that they have not been denied for some reason (hence
 the
 denied group).  The re-challenge will lead to students logging into
squid
 with their friends account.  A Cache Access Denied screen is a much
 better
 alternative.
 
 Yes it was a config issue.
 Re-writing your ACLs slightly to follow that exact logic as described
above
 should solve your problem.
 
 
 Note that once I have this working, there will be other “denied” groups
 to
 deny on, prior to allowing access.
 
 Any suggestions or ideas are appreciated.
 
 Regards,
 Dion
 
 
 auth_param basic program c:/squid/libexec/squid_ldap_auth.exe ..
 auth_param basic children 5
 auth_param basic realm VSC
 auth_param basic credentialsttl 5 minutes
 
 external_acl_type ldapgroup LOGIN ..
 
 acl ldap-auth proxy_auth REQUIRED
 
 acl accessdenied external ldapgroup InternetAccessDeny
 acl accessallowed external ldapgroup InternetAccess
 
 http_access deny accessdenied
 
 Change the above line to:
 http_access deny accessdenied all
 
 ... which will produce the Access Denied page instead of a challenge.
 
 Any other denied groups need to go in here one to a line with all at
the
 end of each line.
 
 
 After all them add a new line:
 http_access deny !ldap-auth
 
 ... which will cause Squid to challenge if no credentials are given yet.
 People who have given _any_ valid credentials will not be asked twice.
 This action was being done as side-effect of the accessdenied ACL test,
but
 with the new version it needs to be done separately.
 
 
 http_access allow accessallowed
 http_access deny all
 
 
 Amos
 
 --- Scanned by M+ Guardian Messaging Firewall ---



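For reference, pulling together the pieces quoted above, the complete access-list ordering Amos describes would read roughly as follows. This is a sketch only; the auth_param and external_acl_type helper arguments are elided here just as they are in the thread:

```text
# Deny members of any "denied" group, without re-challenging.
# The trailing 'all' keeps Squid from treating the deny as bad credentials.
http_access deny accessdenied all

# Challenge for credentials only if none have been supplied yet.
http_access deny !ldap-auth

# Allow members of the permitted group; deny everyone else.
http_access allow accessallowed
http_access deny all
```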




RE: [squid-users] Squid/LDAP re-challenges browser on http_access deny

2009-10-06 Thread Amos Jeffries
On Wed, 7 Oct 2009 14:23:45 +1100, Dion Beauglehall
beaugleha...@vermontsc.vic.edu.au wrote:
 Hi,
 
 I am now having issues with custom error pages,
 
 I have the deny_info line for the accessdeny acl, but it isn't getting
used
 (I assume because the access deny line finished with all). Eg:
 
 deny_info ERR_ACCESS_DENIED_MISUSE accessdenied
 http_access deny accessdenied all
 
 I have tried removing the all, but that puts me back into a
re-challenge
 loop (which is why all was included).
 
 I am hoping to have a list of denied messages which give instructions to
 the user on the steps required to fix the issue, depending on what reason
 they were denied for.  Is there any suggestions someone can offer, or is
 there relevant variables (eg. The acl which denied them) which can be
 passed to an external handler?  I'd rather do it with static ERR pages,
but
 whatever works!

Magic voodoo:

acl dummy src all
deny_info ERR_ACCESS_DENIED_MISUSE dummy
http_access deny accessdenied dummy


See how it works? ;)

Amos
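In other words, each deny line can end in its own always-true dummy ACL, and deny_info keys off that dummy to select the error page. A sketch extending the trick to a second denied group (the "quotadenied" ACL and ERR_ACCESS_DENIED_QUOTA page are invented names for illustration):

```text
# One always-true dummy ACL per error page.
acl misusepage src all
acl quotapage  src all

deny_info ERR_ACCESS_DENIED_MISUSE misusepage
deny_info ERR_ACCESS_DENIED_QUOTA  quotapage

# The dummy is the last ACL matched on denial, so its
# deny_info page is the one shown.
http_access deny accessdenied misusepage
http_access deny quotadenied  quotapage
```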


 


[squid-users] website accessible on one proxy but not through another

2009-10-06 Thread goody goody
Hi all,

I am running squid/2.5.STABLE10 on FreeBSD.

I am running two different proxy servers for different LANs, but users
experience a problem visiting the site below through one proxy, whereas the
same site is accessible through the other. Please advise what the possible
reason could be.

I have tried to purge the cache but this object is not in the cache (404 error 
returned).

Regards,
.Goody.

ERROR
The requested URL could not be retrieved



While trying to retrieve the URL: http://www.swift.com/about_swift/index.page? 

The following error was encountered: 

Connection Failed 
The system returned: 

(13) Permission denied

The remote host or network may be down. Please try the request again






  


Re: [squid-users] Odd behavior

2009-10-06 Thread Luis Daniel Lucio Quiroz
On Tuesday, 6 October 2009 18:57:34, Amos Jeffries wrote:
 On Tue, 6 Oct 2009 17:57:24 -0500, Luis Daniel Lucio Quiroz
 
 luis.daniel.lu...@gmail.com wrote:
  Hi Squids,
 
  Using Squid 3.0.19 we're having problems adding one more ACL.  Our
  configuration has about 2000 http_access and about 2500 acl's.
 
  Now, adding one more acl, or even modifying a file pointed to by an acl,
  such as acl myacl dstregexp myfile, our squid slows down too much.
 
  Symptoms:
  - squid -k parse, OK
  - squid -k reconfigure, squid slows.  cache.log says squid is reloading,
  but it is too slow; the squid process begins to use about 99% of CPU.
  No dying message in the log.
 
  I wonder if there is a maximum number of squid acl, http_access, or regexp entries.
 
 No defined limits as such. It's just long lists/trees that need to be
 walked over individually on each use and built on reconfigure.
 
 The http_access list get walked once per request. Each ACL (mostly trees,
 some lists) get walked once per test. How fast or slow really depends on
 what types of ACL and what order you place them in http_access.
 
 Why do you have so many?
 
 Amos
 
So many, yes.
See my debug here: http://pastebin.com/f197606fa
From lines 58-62, the helpers delay too much.  This is not a small machine: it
has 8 AMD CPUs @ 3.2GHz, 64-bit, with 64 GB of RAM.
If we remove any of the ACLs we've added, this does not occur.
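As a side note on Amos' point about ordering: since the http_access list is walked once per request and each ACL once per test, placing cheap IP/port tests before expensive regex and external-helper lookups keeps the common case fast. A hedged sketch (all names hypothetical):

```text
# Cheap tests first: src/port lookups are fast tree walks.
acl localnet src 10.0.0.0/8
http_access deny !localnet

# Expensive tests last: regex lists are scanned linearly,
# and external ACLs call out to helper processes.
acl badsites url_regex "/etc/squid/badsites.regex"
http_access deny badsites
```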


RE: [squid-users] Squid/LDAP re-challenges browser on http_access deny

2009-10-06 Thread Dion Beauglehall
Hi,

It works fine for the straight deny, but I have one ACL (from an external
helper) which was designed to be used as an allow list, which (of course) I
want to use as a deny. Putting

deny !papercutallow dummy

seems to just hang squid.

Thoughts? Suggestions?

In the meantime, I've contacted papercut about whether the external helper can 
work as a deny group...

Regards,
Dion

