[squid-users] current concurrent connections

2006-05-29 Thread lokesh.khanna
Hi

I want to plot the total number of current concurrent connections in Squid using MRTG.
How can I do this? Which OID do I need to poll?

Thanks - Lokesh 
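
Squid's built-in SNMP agent has to be enabled before MRTG can poll anything. A
minimal squid.conf sketch, assuming Squid was built with --enable-snmp, that the
community name "public" is acceptable, and that a "localhost" src ACL is defined
as in the default squid.conf:

# enable the SNMP agent on the default port and restrict who may query it
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all

The counters live under Squid's enterprise OID (1.3.6.1.4.1.3495); which leaf to
graph for connection counts varies by Squid version, so check the MIB file
(mib.txt) shipped with the Squid source, or compare against the output of
squidclient mgr:info, before pointing MRTG at a specific OID.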


Re: [squid-users] Squid acl containing hostnames issue

2006-05-29 Thread Tino Reichardt
* Jason Bassett <[EMAIL PROTECTED]> wrote:
> 
> I am therefore looking for the easiest and most time effective method
> of blocking rooms when required.  Hostnames seemed to be the best way.
> 
> Any ideas on this issue?

Restricting access on a per-user basis can also be done... just install
an ident daemon with your netlogon script and forbid or allow access
based on the user it reports. Ident daemons are available for most (all?) operating systems...
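
For what it's worth, a minimal squid.conf sketch of that idea (the ACL name and
the file are placeholders, not part of Tino's setup):

# ask the client's ident daemon for a user name, then match it against a list
ident_lookup_access allow all
acl blockedusers ident "/etc/squid/blocked_users"
http_access deny blockedusers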

I have written a redirector where you can allow or disallow access for
users and hosts on the fly via a web interface... maybe that's also worth a look
:)

See http://www.mcmilk.de/projects/squidwall/ for more information about
the redirector.


-- 
regards, TR


Re: [squid-users] cache storage problem? (squid 3)

2006-05-29 Thread Matus UHLAR - fantomas
> On 5/26/06, Matus UHLAR - fantomas <[EMAIL PROTECTED]> wrote:
> >is that on linux? try checking /proc/interrupts. Maybe reordering PCI cards
> >would help a bit.
> >Do you use 32 or 64 bit architecture? With 32 bit, you probably can't use
> >more than one (or two?) GB of data segment per process, which may also
> >cause some more load...

On 26.05.06 10:46, Dan Thomson wrote:
> This is on a stable debian system. 32 bit architecture... but data
> segments _should_ be well within limits.

> >Squid probably tries to find out which objects to purge from the memory cache,
> >and then it decides where to save them. Also, it has to purge some objects
> >off the disk, which, with a big memory cache and a relatively small disk
> >cache, results in a lot of CPU processing.

> I've come to learn that this is a result of squid blocking for diskd.

Oh! use "aufs" on debian instead of 'diskd' - that should give you more
speed.
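
A sketch of the change, assuming a 10 GB cache under /var/spool/squid (path and
size are placeholders):

# replace the existing "cache_dir diskd ..." line with an aufs one,
# then run "squid -z" once to recreate the swap directories
cache_dir aufs /var/spool/squid 10000 16 256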

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux IS user friendly, it's just selective who its friends are...


Re: [squid-users] Best Caching Engine

2006-05-29 Thread Kinkie
On Sat, 2006-05-27 at 12:18 -0700, Aaron Chu wrote:
> According to my discussion with some vendors, the C2300 does this:
> 1200 http transactions per sec in forward proxy (I believe the  
> effective hit ratio is 55% for this test set)
> 1800 http tps in reverse proxy (90% hit ratio for this test set)
> As for throughput, it probably comes with a few ports of gigabit  
> ethernet connectivity, but it can be expanded with additional ports.
> It has six 144 GB 15k RPM FC disks, and I assume it uses the same I/O
> architecture as most of the other NetApp products.

You need to watch out for ACL complexity: I've had an experience with
NetCache appliances where, due to a complex ACL (ported over from a
running Squid conf), the actual performance of a NetCache box was
exactly one third of the spec (600 hits/second, down from a spec of 1.8k).

Kinkie


[squid-users] Daily digest

2006-05-29 Thread Jason Bassett

Hi

Is there a way I can receive a daily digest of posts to this mailing list 
instead of loads of separate emails, just like the squidGuard mailing list?


Jason




[squid-users] Squid S10 appears IO bound when cache is full

2006-05-29 Thread Frank Hamersley
I am running STABLE10 on an old firewall/gateway system (Compaq P3 550 MHz)
and just recently have noticed some poor performance now that the cache has
reached its allowable limits.

When bringing new files into the cache, the system struggles and becomes
seriously I/O bound before returning the objects to the browser.

I suspect that squid is trying to find the oldest files in the cache to drop
before bringing in the new data.  The outcome is that the surfer sees
abysmal performance as this process appears to proceed serially.  I also
expect the cache is designed to facilitate lookup, and less so to purge
out-of-date files.

Is this expected behaviour or are there param tweaks I could use to get the
jump on this by clearing cache ahead of time when the system is "idle"?

Cheers, Frank.



Re: [squid-users] Daily digest

2006-05-29 Thread Henrik Nordstrom
On Mon, 2006-05-29 at 12:15 +0100, Jason Bassett wrote:

> Is there a way I can receive a daily digest of posts to this mailing list 
> instead of loads of separate emails, just like the squidGuard mailing list?

Yes, by subscribing to the squid-users-digest list instead of
squid-users.

Same list, different delivery.

Regards
Henrik




Re: [squid-users] Squid S10 appears IO bound when cache is full

2006-05-29 Thread Henrik Nordstrom
On Mon, 2006-05-29 at 15:15 +1000, Frank Hamersley wrote:

> When bring new files into cache, the system struggles and becomes seriously
> IO bound before returning the objects to the browser.

Usually this is due to a lack of memory, often caused by incorrect
cache_mem settings or sometimes by having too little memory compared to
the amount of disk cache.

See the FAQ on memory usage.
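
As a rough illustration of that tuning, a sketch assuming about 512 MB of RAM
is available to Squid (the numbers are placeholders, not recommendations):

# keep the in-memory object cache modest
cache_mem 64 MB
# size the disk cache so its index (roughly 10 MB of RAM per GB of disk
# cache, according to the FAQ) still fits comfortably in physical memory
cache_dir aufs /var/spool/squid 8000 16 256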

Regards
Henrik




[squid-users] Cache dir filling up - should I increase the size of cache_dir

2006-05-29 Thread yance
 
Hi all,

My cache dir is set to 5GB, and the mgr:info gives me:

Connection information for squid:
        Number of clients accessing cache:              60
        Number of HTTP requests received:               12752714
        Number of ICP messages received:                0
        Number of ICP messages sent:                    0
        Number of queued ICP replies:                   0
        Request failure ratio:                          0.00
        Average HTTP requests per minute since start:   360.3
        Average ICP messages per minute since start:    0.0
        Select loop called: 85028256 times, 24.973 ms avg

df -h gives me
/dev/mirror/gm0s1e    10G    4.8G    4.8G    50%    /cache

So 4.8 GB of the 5 GB I allocated for the cache_dir is used. If this keeps
growing, should I increase the size of cache_dir?

Kind regards,


Yance Kowara

-Original Message-
From: Richard Lyons [mailto:[EMAIL PROTECTED] 
Sent: Monday, 29 May 2006 11:01 PM
To: qmail@list.cr.yp.to
Subject: Re: mess822 / 822field bug? 822field X-Spam-Flag matches "X-Spam:"

On Mon, 29 May 2006, Olivier Mueller wrote:

> I'm using ifspamh (http://www.gbnet.net/~jrg/qmail/ifspamh/) to trash 
> my spam (with ospam), and for a few weeks now many spam messages 
> have been getting through the system, even though they are detected as spam.

http://marc.theaimsgroup.com/?l=qmail&m=106423900028713&w=2

Rick.








Re: [squid-users] Minimum hardware required to run squid 2.5

2006-05-29 Thread Zhen Zhou

Hi,
HAVP is suitable for your request.

Regards,

Zhou Zhen

On 5/26/06, Luciano Pereira Areal <[EMAIL PROTECTED]> wrote:

Luciano Pereira Areal wrote:
> > Hello folks!!
> >
> > I'm intending to set up and configure a "little server" to test and run
> > squid. My question is really simple:
> >
> > What is the minimum hardware required to run squid on a Slackware 10.2
> > box, serving a small network with 20 workstations?
> >
> > Any answer or knock in the head will be gladly accepted.
> >
> > Thanks in advance and regards,
> > Luciano Pereira Areal
> >
> >

> A Pentium 200 with 128 MB of RAM will probably be more than enough. No,
> I'm not kidding.
>
> Really, if it's a new server, look for the cheapest you can buy. The
> hardware will be mostly idle anyway. 20 workstations is virtually nothing.
>
> Don't fall for the Windows / Oracle people, who say that you can't do
> anything with less than 4 top of the line CPUs. It's only that way because
> their software stinks.
>
>
> Pedro
>
Hi Pedro!

Incredible, huh? Fortunately, we do not need to build our servers with
proprietary software. Can you imagine spending more than R$ 8.000,00 on a
shiny, brand new "Pentium D 64EE Extra-Mint-Super-Duper-Plus++" server, just to
run a proxy cache? Neither can I... but that is exactly what my boss was about
to do, until I found a humble old-fashioned machine in the IT support room...

Now, after reading your e-mail, I took that old machine, and I'm running Squid
2.5 on a Pentium 133 MHz with 64 MB RAM and a 10 GB IDE hard drive here. It's
amazing how flawlessly it works with Squid... the CPU usage in "top" never goes
over 35%... right now, 3 users are using my server and it sits at about 13.7%...

I intend to enhance this "little kid" with more features, such as report
generators and probably an antivirus. By the way, do you know a good antivirus
to run alongside Squid? ClamAV? F-Prot? AVG? Anything? (lol :-) )

Thanks for the answer.

Best regards,
Luciano Pereira Areal






Re: [squid-users] Daily digest

2006-05-29 Thread Zhen Zhou

Hi, Jason,
On the mailing list homepage,
http://www.squid-cache.org/mailing-lists.html, you can find the
following information:

squid-users-digest
Posting address:         [EMAIL PROTECTED]
Administrative address:  [EMAIL PROTECTED]
Purpose:                 This is a digested version of the normal squid-users list.
                         Messages are automatically cross-posted, and you may post to either list.
To subscribe:            Send a message to [EMAIL PROTECTED]
To unsubscribe:          Send a message to [EMAIL PROTECTED]


Zhou Zhen
On 5/29/06, Jason Bassett <[EMAIL PROTECTED]> wrote:

Hi

Is there a way I can receive a daily digest of posts to this mailing list
instead of loads of separate emails, just like the squidGuard mailing list?

Jason





Re: [squid-users] Cache dir filling up - should I increase the size of cache_dir

2006-05-29 Thread Jakob Curdes

yance wrote:

> Hi all,
>
> My cache dir is set to 5GB, and the mgr:info gives me:

... nothing of interest here.

> df -h gives me
> /dev/mirror/gm0s1e    10G    4.8G    4.8G    50%    /cache

The interesting question is, how did you set the cache replacement
watermarks?

#Default:
# cache_swap_low 90
# cache_swap_high 95

If you leave them at the default values, your cache should grow until it
has reached 4.75 GB in size, then deletion will make room until less
than 4.5 GB is in the cache. It will then oscillate between 4.5 and
4.75 GB. With these default values, your cache_dir should never fill up
entirely.


If you want, you can play around with the cache_replacement_policy 
parameter, but this makes sense only AFTER your cache has reached the 
high water mark.
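
For example, a sketch with both knobs set explicitly (the values are only
illustrations; the heap policies also require a Squid built with
--enable-removal-policies):

# start evicting a little earlier than the 90/95 defaults
cache_swap_low 85
cache_swap_high 90
# optionally use a heap-based policy instead of the default list-based LRU
cache_replacement_policy heap lru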



Hope this helps.
BTW, you did not say which version of squid you are running - and the
qmail discussion at the bottom of your message was slightly confusing to
me; I assume it has nothing to do with the current thread?


Yours,
Jakob Curdes



Re: [squid-users] Minimum hardware required to run squid 2.5

2006-05-29 Thread Jakob Curdes
We have a Squid 2.5 proxy on a Pentium-II 400 MHz machine serving 50-100
users without any problems, and doing other things besides proxying. We did
add memory to the original configuration (now 512 MB), which made
things a lot faster.


Yours,
Jakob Curdes



Re: [squid-users] cache storage problem? (squid 3)

2006-05-29 Thread Dan Thomson

Yeah. I think that's going to be the plan.

There are a few show-stopping bugs in squid3 right now, though. I was
hoping to get something working with diskd in the meantime, but it seems
like the I/O queue just grows too fast.

On 5/29/06, Matus UHLAR - fantomas <[EMAIL PROTECTED]> wrote:


Oh! use "aufs" on debian instead of 'diskd' - that should give you more
speed.




--
Dan Thomson
Systems Engineer
Peer1 Network
1600 555 West Hastings
Vancouver, BC
V6B 4N5
866-683-7747
http://www.peer1.com


[squid-users] Help in ACL Configuration using three rules

2006-05-29 Thread Sergio Chavarri
Hi everyone,
After researching the squid archives, I think something may be missing,
and I would like feedback on this configuration.

I am trying to create an access list with "denied sites" and denied
extensions, like mp3 and exe.

But at the same time I would like to allow a special list of domains
to be accessed without restrictions (mp3, exe).

At the moment I can deny a list of sites and deny an extension list
(mp3, exe) at the same time, but I cannot get the special list to be
allowed without restriction.

Please take a look at the configuration below and let me know my
mistakes so I can fix them.

Thanks a lot. Sergio

# Proxy port
http_port 8080

# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
#           hostname          type    proxy-port  icp-port  options
#           --------          ----    ----------  --------  -------
cache_peer  proxy.mysite.com  parent  8080        0         default no-query allow-miss login=PASS

#  TAG: hierarchy_stoplist
hierarchy_stoplist cgi-bin ?

#  TAG: no_cache
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

#  TAG: cache_mem  (bytes)
cache_mem 64 MB

#  TAG: cache_dir
cache_dir ufs /var/spool/squid 1000 64 256

#  TAG: auth_param
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

#  TAG: refresh_pattern
#  Suggested default:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320


# ACCESS CONTROLS
#  TAG: acl
# Local networks (class C): office1, office2, office3
acl office1 src 7.24.10.0/24
acl office2 src 7.24.50.0/24
acl office3 src 7.24.60.0/24

acl SSL_ports port 443 563 8143

acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

# acl deny for web radio streams
acl webRadioReq1 req_mime_type -i ^video/x-ms-asf$
acl webRadioReq2 req_mime_type -i ^application/vnd.ms.wms-hdr.asfv1$
acl webRadioReq3 req_mime_type -i ^application/x-mms-framed$
acl WMP browser Windows-Media-Player/*

# acl deny for extensions
acl BlockExt url_regex -i \.mp3$ \.asx$ \.wma$ \.wmv$ \.avi$ \.mpeg$ \.mpg$ \.qt$ \.ram$ \.rm$ \.iso$ \.wav$ \.exe$

# Special domains without restrictions (exe, mp3, ...)
acl specialdomain dstdomain "/etc/squid/specialdomain"


# Access deny for web radio / streams
http_access deny WMP all
http_access deny webRadioReq1 all
http_access deny webRadioReq2 all
http_access deny webRadioReq3 all

http_reply_access deny webRadioRep1 all
http_reply_access deny webRadioRep2 all
http_reply_access deny webRadioRep3 all

http_access deny BlockExt

# Allow specialdomain without BlockExt
http_access deny BlockExt !specialdomain

# Extension list for denied domains and paths, loaded from a file
acl deniedsites url_regex "/etc/squid/deniedsites"



Re: [squid-users] Alternative to standard Squid authentication schemas

2006-05-29 Thread Alberto Avi

Hi Chris,

   thank you very much for your suggestions.
I tried them, but for my proxy solution it is very important to have a user
session and not an IP session.

In fact I use a content filtering solution which works with user group policies.
For this reason I tried an external_acl_type with ttl=0 to force the
helper to be consulted for every client request:


external_acl_type user-check ttl=0 %SRC /path/to/custom-helper
acl loggedIn external user-check

http_access deny !loggedIn
http_access allow siteIPs
http_access deny all

deny_info http://authentication.my.domain/authenticate.php loggedIn

and this is the source of custom-helper:

#!/bin/bash
# Log every lookup Squid passes to the helper, then always answer OK.
log="/usr/local/prod/squid-2.5.STABLE14/var/logs/squid-auth.log"

while read line
do
   echo "$line" >> "$log"    # record the %SRC value sent by Squid
   echo "OK user=foouser"    # grant access and tag the request with a user name
done

I don't understand why in the access.log some requests come through without
a user name ( - ):


1148930239.227123 10.182.35.253 TCP_MISS/302 475 GET 
http://www.google.com/ foouser DIRECT/66.249.85.99 text/html
1148930239.624397 10.182.35.253 TCP_MISS/200 4339 GET 
http://www.google.it/ foouser DIRECT/66.249.85.104 text/html
1148930242.887134 10.182.35.253 TCP_MISS/200 4339 GET 
http://www.google.it/ - DIRECT/66.249.85.99 text/html
1148930242.936 66 10.182.35.253 TCP_MISS/304 193 GET 
http://www.google.it/intl/it_it/images/logo.gif - DIRECT/66.249.85.104 
text/html


Alberto.


Chris Robertson wrote:

[EMAIL PROTECTED] wrote:


Hello,

   is there a way to authenticate Squid users through an SSL form?

I can't use the Basic authentication scheme for security reasons.
I can't use the NTLM authentication scheme because my Windows domains
aren't trusted together.
I'd like to use the Digest authentication scheme, but the users' passwords
in my LDAP are encrypted, so it isn't easy to implement.


Thank you very much for your attention and for your time,

Alberto.


The short answer is that Squid, by itself, cannot perform this task.
However, the external_acl_type and deny_info directives, along with a
webserver and a back-end LDAP query, should allow you to do it. You will
have to store (and look up) session information outside squid, and this
will preclude seeing user names in the access.log.


Here's the basic idea: you have an external ACL helper that takes the
client IP and performs a lookup. If a valid session is found, access
is allowed. If not, access is denied and the deny_info directive
refers the browser to a login page (hosted on a webserver) that
creates the session data (which can be routinely cleared text files
or a database). Here's a guideline of the squid.conf portion...


external_acl_type user-check ttl=5 %SRC /path/to/helper
acl loggedIn external user-check

http_access deny !loggedIn
http_access allow siteIPs
http_access deny all

deny_info http://authentication.my.domain/authenticate.php loggedIn

Creating the helper, authentication page and back end are left as 
exercises for the reader.


Chris





RE: [squid-users] Help in ACL Configuration using three rules

2006-05-29 Thread Jason Staudenmayer
This looks like your problem
>http_access deny BlockExt
>
>#Allow specialdomain without BlockExt
>http_access deny BlockExt !specialdomain
>
You have an unconditional deny first; remove that first line and try it again.
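
A sketch of the corrected ordering, keeping only the conditional rule:

# the unconditional "http_access deny BlockExt" line is removed, so the
# special domains now bypass the extension filter
http_access deny BlockExt !specialdomain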

Jason

RE: [squid-users] Help in ACL Configuration using three rules

2006-05-29 Thread Sergio Chavarri
Thank you Jason for the advice. It works!
Sergio

--- Jason Staudenmayer <[EMAIL PROTECTED]>
wrote:

> This looks like your problem
> >http_access deny BlockExt
> >
> >#Allow specialdomain without BlockExt
> >http_access deny BlockExt !specialdomain
> >
> You have a deny all first remove that first one and
> try it again.
> 
> Jason

Re: [squid-users] Cache dir filling up - should I increase the size of cache_dir

2006-05-29 Thread yance_kowara
Hi Jakob,

Thank you for the hint.

I am using Squid 2.5.STABLE13 on FreeBSD 6.0-RELEASE.

Yes I am using the default watermark value.

I am only a bit concerned because over the past few days the kernel has
been spitting out messages like

 squid kernel: /cache: optimization changed from SPACE to TIME
or
 squid kernel: /cache: optimization changed from TIME to SPACE

I guess it's just telling me that the space is nearly filled up.

I am sorry for including the qmail discussion thread in the email, not
sure how it got there, but this is what happens if you are trying to work
while rushing to bed at the same time. :)

Kind regards,

Yance Kowara









Re: [squid-users] Best Caching Engine

2006-05-29 Thread Scott Jarkoff

On 5/27/06, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:


Does anyone know which is the best (commercial or freeware) caching
engine for a large ISP? Is there any comparison sheet between different
cache engines?


I have heard really good things about BlueCoat and their array of
caching products.
--
Scott Jarkoff


[squid-users] inconsistent caches using a sibling cache hierarchy

2006-05-29 Thread Domingos Parra Novo
Hiyas,

I'm using a pool of (4) squid servers as a web accelerator for a slow backend
(Vignette, to be more exact). Right now almost everything works like a charm,
except for one thing.

I'm using a sibling hierarchy between those machines, and the caching mechanism
works like this (see the configuration sketch after the list):

1 - the request arrives on any squid server;
2 - the server tries to find the object on its own cache (delivers to the 
client if found, goes to the next step if not);
3 - the server uses ICP to contact its siblings and checks whether any of them
has the content (retrieves it from the sibling and delivers it to the client if
found, goes to the next step if not);
4 - if all the above fails, the request is directed to the (slow) backend 
server.
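
For illustration, the sibling relationship on each box would be declared with
cache_peer lines along these lines (host names are placeholders; 3128/3130 are
the default HTTP and ICP ports):

# sketch for server 1; the other servers list their own three siblings
cache_peer squid2.example.com sibling 3128 3130 proxy-only
cache_peer squid3.example.com sibling 3128 3130 proxy-only
cache_peer squid4.example.com sibling 3128 3130 proxy-only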

But this backend server has an "expire/purge" feature. I've written a
gateway (using the "purge" tool and converting it into a simple CGI), so
anytime my webmasters change an object in our CMS (which is also the backend
server), an automatic purge is sent to all our squid servers. This works as
expected, but usually takes a few seconds (around 10 to 15) to complete.

If, for any reason (high load, for example), a purge request for object "foo"
reaches servers 1, 2 and 3 but has not yet been processed on server 4, and a
new request for object "foo" arrives on server 1 (which has already purged it),
the request is redirected from server 1 to server 4 (which still holds an old
version of the object). In short, I sometimes end up with an inconsistent
cache, with old objects on my squid servers.

I tried to circumvent this situation using the "refresh_stale_hit" directive,
but it didn't help at all (I configured it to 30 seconds, but it seems that
squid does not consider a purged object "stale").

I would like to know if anyone has a solution to my problem (forcing a
specific object to be retrieved from the origin, and not from the cache, if
it has just expired).

I know I can use other kinds of hierarchies, but I was not able to find one
that would guarantee some kind of high availability (no SPOFs) and a
small number of requests to my backend.

By the way, would HTCP help me with this? I know it is "smarter" than ICP, but
I haven't seen much documentation about either of them, to tell you the truth.

Thanks in advance,

Domingos.

--
Domingos Parra Novo
Terra Networks Brasil
[EMAIL PROTECTED]




[squid-users] how to write this urlpath_regex

2006-05-29 Thread huang mingyou

hello, list.
 I have a problem when writing a filter rule. I have two URLs:
http://host/bbs/1.php and http://host/bbs/foo/bar/x.jpg
 Now, I want squid to cache the jpg file but not cache php
or other script files.
If I use a urlpath_regex rule matching bbs or php, then
http://host/bbs/foo/bar/x.jpg will be filtered too.
So, how do I write a rule that filters bbs in the URL, but not if the URL contains jpg?
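
One possible sketch, assuming the goal is simply to stop caching .php responses
under /bbs/ (the ACL name and pattern are illustrations, not a tested rule):

# deny caching for anything whose path starts with /bbs/ and ends in .php
# (no trailing $ so that query strings like /bbs/1.php?x=1 also match)
acl bbs_php urlpath_regex -i ^/bbs/.*\.php
no_cache deny bbs_php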

--
Huang Mingyou


[squid-users] Access Report

2006-05-29 Thread nonama
Dear all,
I have a question about the access log. How do I change
the date value in the access log so that it is readable?
Is there any tool that I can use to generate reports on
user access (where, when and at what time) and also the top 10
most visited websites?

Your help is highly appreciated.



Re: [squid-users] Access Report

2006-05-29 Thread Arianto C Nugroho

Quoting nonama <[EMAIL PROTECTED]>:


Dear All,
I have a question on the access log. How do I change
the date value in the access log so that it can be
readable?


 IMHO it's quite readable... all the fields you're looking for are there.


Is there any tool that I can use to generate report on
user access (where, when & what time) and also top 10
popular web visited.


 You might want to check Calamaris.


Re: [squid-users] Access Report

2006-05-29 Thread squid

nonama wrote:
Dear All, 
I have a question on the access log. How do I change

the date value in the access log so that it can be
readable? 
Is there any tool that I can use to generate report on

user access (where, when & what time) and also top 10
popular web visited.

YOur help is highly appreciated.


Hello,

The time format in the squid access log is a Unix timestamp (seconds since
Jan 1, 1970) with millisecond resolution. You can convert it to a
human-readable format with a simple Perl script like this:


#!/usr/bin/perl
# Put the access.log timestamp here (seconds since the epoch, with optional milliseconds)
$timestamp = "1146034225.069";
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime($timestamp);
# localtime() returns the month as 0-11 and the year as years since 1900
printf "%04d-%02d-%02d %02d:%02d:%02d\n", $year + 1900, $mon + 1, $mday, $hour, $min, $sec;


There are many tools for squid log analysis. You can get them from
http://www.squid-cache.org/Scripts/


Thanks,
Visolve Squid Team,
http://squid.visolve.com