RE: [squid-users] How may I block MSN Messenger...

2003-08-10 Thread Henrik Nordstrom
ons 2003-08-06 klockan 10.32 skrev Boniforti Flavio:
> > These are allowed.
> > 
> > Which rules did you have which you think should have blocked this?
> 
> 
> acl msn_no_block src 10.167.211.11/255.255.255.255
> acl msn_server req_mime_type ^application/x-msn-messenger


Are you sure the clients send requests with this content type?
access.log only shows the content type of the replies, not of the
requests. To see the content type of the requests you need to enable
log_mime_hdrs and extract the Content-Type from the first block of
headers [] (note: the second block [] contains the reply headers).
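For reference, this is the single directive involved (a minimal sketch of the squid.conf change; the bracketed header blocks then appear at the end of each access.log line):

```
# Append the full request and reply headers to every access.log entry.
# The first [...] block holds the request headers, the second the
# reply headers.
log_mime_hdrs on
```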

For what it is worth, the log you sent only contained the IP which should
be allowed, and those requests were correctly allowed.


-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

Please consult the Squid FAQ and other available documentation before
asking Squid questions, and use the squid-users mailing-list when no
answer can be found. Private support questions are only answered
for a fee or as part of a commercial Squid support contract.

If you need commercial Squid support or cost effective Squid and
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]



Re: [squid-users] Squid Authentication and MS Active Directory

2003-08-10 Thread Henrik Nordstrom
On Sunday 10 August 2003 15.37, Farid IZEM wrote:

> Which helpers must I use to authenticate my users against W2K AD

The LDAP helpers work fine for this purpose.

> ??? How do I configure it ???

See the documentation to the LDAP helpers.

> Do I need Samba to do so ???

No, but using Samba winbind may also work provided your AD is 
configured to support NT clients/servers.
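A sketch of what that can look like with the squid_ldap_auth helper shipped with Squid 2.5 (the hostname, DNs, and password below are placeholders, not values from this thread; consult the helper's own documentation for the exact flags in your version):

```
# Basic authentication against a W2K Active Directory via LDAP.
# dc1.example.local and the DNs below are illustrative placeholders.
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth \
    -b "dc=example,dc=local" \
    -f "(sAMAccountName=%s)" \
    -D "cn=squidproxy,cn=Users,dc=example,dc=local" -w "secret" \
    dc1.example.local
auth_param basic children 5
auth_param basic realm Squid proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```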

Regards
Henrik

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: [squid-users] authentication issues

2003-08-10 Thread Henrik Nordstrom
On Tuesday 05 August 2003 19.30, Downing, Mark wrote:

> I have finally figured out how to make the squid_ldap_auth work
> with an Active Directory tree that one of our divisions has setup.
> My problem now is how to configure squid to work with BOTH
> msnt_auth and squid_ldap_auth. I still need to be able to
> authenticate users in the NT domain.

Search for "Open2" in the archives.

Regards
Henrik

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: [squid-users] squid + axel - netiquete idea

2003-08-10 Thread Robert Collins
Bob, it seems to me you are missing the point of network load balancing.
Someone with 10 modem lines should have their ISP performing load
balancing and redundancy at an IP level, not by manual load balancing.

This form of 'acceleration' dramatically increases the overhead for web
servers - e.g. checking databases, logging requests, checking access
control lists.

The act of transmission is only one part of the load involved in
handling a request, and these 'accelerators' -only- share that part of
the load; everything else is duplicated and wasted.

There is a place for swarming - but not in the client-server model of
HTTP. Things like gnutella, where swarming is a part of the protocol,
are an appropriate place and if someone with 10 modem lines wants to
use application level load balancing for static file downloads, gnutella
is probably an ideal tool - for them.

Rob


-- 
GPG key available at: .


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] deny_info and http_reply_access

2003-08-10 Thread Schelstraete Bart
Joshua Brindle wrote:

After trying to use deny_info with my http_reply_access
acl and being unsuccessful, I searched the web and found
that others had the same problem and that it is a known limitation.
My question is: what kind of limitation is it? One where the code
just hasn't been written, or a design limitation? (in squid-3)
 



3. Known limitations

There are a few limitations in this version of Squid that we hope to 
correct in a later release:

*deny_info*

   deny_info only works for http_access, not for the acls listed in
   http_reply_access


I didn't see anything about this in version 3 yet.
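For reference, the combination that does work is deny_info tied to an acl used in http_access (a minimal sketch; the acl and error-page names are illustrative):

```
# Supported: custom error page for an http_access denial.
acl blocked dstdomain .blocked.example.com
http_access deny blocked
deny_info ERR_CUSTOM_DENIED blocked

# Not supported (the documented limitation): deny_info has no
# effect for acls that only match in http_reply_access.
```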

rgrds,

  Bart

[squid-users] [PATCH] external_acl and http_reply_access

2003-08-10 Thread Joshua Brindle
>On Sunday 10 August 2003 10.31, Joshua Brindle wrote:
>
>> X-Naughty header. I've been playing around with an external acl
>> and I always get data back if I use something like %LOGIN or %PATH
>> but I cannot get any header info back with %{header} . In the
>> squid.conf it says "request header" but i figured that was just
>> an oversight of using external acl's in http_access but alas
>> it does not appear to be giving me reply headers back :(
>
>
>There is no external acl method for accessing reply headers, only 
>request headers.

The enclosed trivial patch fixes this by checking for ch->reply and
sending reply headers, and for ch->request and sending request headers.
I don't know if there is a situation where neither of these would be
true; I hope not...

>Also, in squid-2.5 external acl methods is not suitable for use in 
>http_reply_access as http_reply_access can not wait for any external 
>lookups to complete. The latter is addressed in Squid-3 I think.
>
I am using squid-3
>
>(access controls using reply headers must take place in 
>http_reply_access, as they need access to the reply and http_access 
>executes before the request is forwarded..)
>
>Regards
>Henrik


ext-acl_reply_header.diff
Description: Binary data


Re: [squid-users] CPU utilization performance issue

2003-08-10 Thread Tay Teck Wee
Hi Bart,

I'm using reiserfs, and aufs because it's more suitable for
Linux.

--
Wolf

 --- Schelstraete Bart <[EMAIL PROTECTED]> wrote:
> Hello,
> 
> Why not using Reiser instead of ext3 with diskd?
> I read a lot of articles saying that reiser is much faster for
> Squid (a lot of 'small files').
> 
> 
> 
> 
>Bart
> 
> Zand, Nooshin wrote:
> 
> >Hi,
> >I am just wondering why you are not using diskd.
> >Based on benchmarks that I read, diskd provides faster I/O
> >performance.
> >I am planning to run squid on Red Hat Linux 9.0 and am thinking of
> >using ext3 and diskd.
> >Thanks,
> >Nooshin
> >
> >-Original Message-
> >From: Tay Teck Wee
> [mailto:[EMAIL PROTECTED]
> >Sent: Friday, August 08, 2003 2:22 AM
> >To: squid-users
> >Subject: Re: [squid-users] CPU utilization
> performance issue
> >
> >
> >Hi everyone,
> >
> >thanks for the input. The ACL list has since been slightly altered,
> >using only src (22 entries), dstdomain (114 entries) and url_regex
> >(20 entries). I am currently on kernel 2.4.20-19.9, so the
> >Hyperthreading might have been optimized.
> >
> >Now the machine is handling about 110 req/s, but again the CPU will
> >climb to about 90-95%. Is it possible for my squid box to go beyond
> >180 req/s, which is the peak for each proxy in the existing pool
> >(ISP env)? I am trying to replace my existing NetCaches with
> >squids...one box for one box.
> >
> >I am wondering if it's because reiserfs consumes more CPU than other
> >filesystems like ext3? Will changing my cache partitions to reiserfs
> >lower the CPU usage? Or can anyone suggest other possible
> >improvements? Thanks.
> >
> >my 3 caching partitions are on 3 separate disks:-
> >/dev/sdb1  /cdata1  reiserfs notail,noatime 1 2
> >/dev/sdc1  /cdata2  reiserfs notail,noatime 1 2
> >/dev/sdd1  /cdata3  reiserfs notail,noatime 1 2
> >
> >--
> >Wolf
> >
> > --- Tay Teck Wee <[EMAIL PROTECTED]> wrote:
> >
> >Hi,
> >  
> >
> >>when I'm getting about 90 req/s or 800 concurrent connections
> >>(according to my foundry L4) to my squid (RedHat 8.0 / 2.5 Stable3
> >>with the deny_info patch), the CPU utilization averages about 80%.
> >>How do I lower the CPU utilization of my squid? Thanks.
> >>
> >>Below is my machine specs:-
> >>
> >>Intel Xeon single-processor 2.4GHz(DELL 2650)
> >>2G physical RAM(w 2G swap under linux)
> >>2X 33G for everything except caching (mirror)
> >>3X 33G for caching (volume) 
> >>
> >>/dev/sda7     505605    68437   411064  15% /
> >>/dev/sda1     124427     9454   108549   9% /boot
> >>/dev/sdb1   35542688   201248 35341440   1% /cdata1
> >>/dev/sdc1   35542688   200888 35341800   1% /cdata2
> >>/dev/sdd1   35542688   200940 35341748   1% /cdata3
> >>/dev/sda3    1035692    49796   933284   6% /home
> >>none         1032588        0  1032588   0% /dev/shm
> >>/dev/sda5    1035660   691648   291404  71% /usr
> >>/dev/sda6     505605    76236   403265  16% /usr/local
> >>/dev/sda8   29695892    83456 28103936   1% /var
> >>
> >>Below is my squid.conf(only the essential). 
> >>
> >>For ACLs, basically I have 3 acl lists (in 3 separate files), one
> >>containing allowable IPs while the others contain denied IPs. I
> >>also have 3 lists of banned sites (in 3 separate files):-
> >>
> >>http_port 8080
> >>acl QUERY urlpath_regex cgi-bin \?
> >>no_cache deny QUERY
> >>cache_mem 400 MB
> >>cache_swap_low 92
> >>cache_swap_high 95
> >>maximum_object_size 2 MB
> >>maximum_object_size_in_memory 100 KB
> >>cache_replacement_policy heap GDSF
> >>memory_replacement_policy heap GDSF
> >>cache_dir aufs /cdata1 16000 36 256
> >>cache_dir aufs /cdata2 16000 36 256
> >>cache_dir aufs /cdata3 16000 36 256
> >>cache_access_log
> /var/log/cachelog/cache.access.log
> >>cache_log /var/log/cachelog/cache.log
> >>cache_store_log none
> >>quick_abort_min -1 KB
> >>acl all src 0.0.0.0/0.0.0.0
> >>acl manager proto cache_object
> >>acl localhost src 127.0.0.1/255.255.255.255
> >>#3 banned list files
> >>acl SBA dstdomain "/usr/local/squid/etc/SBA.txt"
> >>acl CNB dstdomain "/usr/local/squid/etc/CNB.txt"
> >>acl CNB2 url_regex "/usr/local/squid/etc/CNB2.txt"
> >>#3 access list files
> >>acl NetTP src "/usr/local/squid/etc/NetTPsrc.acl"
> >>acl NetDeny src "/usr/local/squid/etc/deny.acl"
> >>acl NetAllow src "/usr/local/squid/etc/allow.acl"
> >>http_access deny SBA
> >>http_access deny CNB
> >>http_access deny CNB2
> >>http_access deny NetDeny
> >>http_access allow NetAllow
> >>http_access allow NetTP
> >>http_access deny all
> >>http_reply_access allow all
> >>cache_effective_user squid
> >>cache_effective_group squid
> >>logfile_rotate 10
> >>deny_info ERR_SBA_DENIED SBA
> >>deny_info ERR_CNB_DENIED CNB CNB2
> >>memory_pools off
> >>coredump_dir /var/log/cachelog
> >>
> >>Thanks again!
> >>
> >>Regards,
> >>Wolf
> >>
> >>

Re: [squid-users] squid + axel

2003-08-10 Thread Bob Arctor
at first - it is not worth doing for every file (only for very large
files). second - if you don't have more than one connection to the
internet (like multiple dialup lines) it is pointless - and thus it
obviously shouldn't be the _default_ option.

the main thing it is useful for is listening to mp3s live from a server
without needing to download them - while still having the ability to
cache them.

in this case, even if squid opens e.g. 10 connections via all gateways
available in the LAN/MAN/WAN, overall bandwidth will be limited by the
bitrate of the mp3 file played live.

i don't see this as 'antisocial', as otherwise users will just go for
_faster_ download lines, NAT'ed ones like the common ADSL services, and
thus their web servers will not have multiple IP addresses to load
balance (ADSL doesn't usually come with a static IP, or - if it is
shared ADSL - the user will not get any IP at all, just the ability to
download via NAT).

why is having 10 slow static-IP lines via different ISPs better than one
fast ADSL with no IP?

because when a user establishes his own webserver it can be aliased on
those 10 connections, with a load-balancing rule set up on his DNS
server.

also - it would provide a better failover mechanism: when one of the
servers is down/overloaded, the connection will continue over the
remaining ones. the same goes for the ISP lines users have: if one line
is down/overloaded and another is not used at all, traffic could go via
it.



On Monday 11 August 2003 01:31, Antony Stone wrote:
> On Sunday 10 August 2003 11:38 pm, Bob Arctor wrote:
> > it accelerates in two ways :
> > 1)if you have more than one connection to the internet, and your proxy
> > does load balance, or you have multiple interfaces in your machine,
> > multiple parts of file are downloaded via multiple connections
> >
> > 2) if the server is load balanced, and its domain has many aliases, chosen
> > round robin as you connect, each part of file is downloaded from
> > different server
>
> Both of these methods seem to assume that your connection to the Internet
> is faster than the rest of the path to the remote server - which I frankly
> feel is unlikely.
>
> I really don't see that opening up multiple connections for downloading
> from a remote server is going to improve your network's performance,
> unless there is a deliberate throttle being placed in your path in order
> to share available bandwidth with other users, in which case trying to
> bypass it is almost certainly against the Acceptable Use Policy of the
> system you are connected through.
>
> If the remote servers are on a round robin DNS, then they're already going
> to be nicely load balanced for different users each downloading complete
> files, so there's no point in creating additional connections for each
> server by only downloading part of the file from it.
>
> I certainly can't see favourable support for this sort of thing getting
> included in Squid.
>
> Regards,
>
> Antony.

-- 


Re: [squid-users] Log files too large

2003-08-10 Thread Schelstraete Bart
Schelstraete Bart wrote:

Gator wrote:

I am finding that Squid (2.5.STABLE2) will fail when the log files reach
a certain size.  I moved them off to access.log.2 and store.log.2 and
life was fine again.
1624135928 Aug  8 10:36 access.log.2
2147483647 Aug  8 09:02 store.log.2
How do I set up these files to rotate automatically so this doesn't
happen again?
 

You cannot do that automatically. What I'm doing is creating a cronjob
that rotates the logfiles every night and generates statistics for
that day.
Squid doesn't have a limit on the file size, but the filesystem has a
2GB filesize limit.
Sorry, my mistake: Squid would have to be modified to allow files bigger
than 2GB... but the question is: who wants that?
I think nobody wants to use this.
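The cronjob approach can be as small as one line, using Squid's built-in rotation (a sketch; the path depends on how Squid was installed, and the logfile_rotate directive in squid.conf controls how many old logs are kept):

```
# System crontab entry: ask Squid to rotate its logs at 00:05 daily.
# The installation path shown is illustrative.
5 0 * * * root /usr/local/squid/sbin/squid -k rotate
```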



rgrds,

 Bart



Re: [squid-users] [ Squid Cache: Version 3.0-PRE2-20030806 ] [ SSL]

2003-08-10 Thread Imad Soltani
I made some changes according to your last post and this:
http://www.squid-cache.org/mail-archive/squid-users/200306/0551.html

On Sat, 2003-08-09 at 17:57, Henrik Nordstrom wrote:
> On Saturday 09 August 2003 17.32, Imad Soltani wrote:
> > Hello all ,
> > Thanks Henrik for the post
> >
> > I now tried to make a new functional and minimal squid.conf from
> > scratch, and then I got, after the key exchange, an access denied to
> > proxy_hostname
> 
> 
> You also need http_access allow the request.
> 

my minisquid.conf : 

visible_hostname proxy_hostname

http_port ip_proxy:80 accel defaultsite=ip_web_server

https_port ip_proxy:443 accel cert=s.crt key=s.key
defaultsite=ip_web_server

acl all src 0.0.0.0/0.0.0.0

acl http proto http

acl https proto https

cache_peer ip_proxy parent 80 0 no-query originserver
name=http.web_server_hostname
cache_peer ip_proxy parent 443 0 no-query originserver ssl
name=https.web_server_hostname

cache_peer_access https.web_server_hostname allow https

cache_peer_access http.web_server_hostname allow http

never_direct allow all





And I get the same error.
Is my squid.conf incorrect somehow?



> Regards
> Henrik



Re: [squid-users] squid: ERROR: no running copy

2003-08-10 Thread Marc Elsen


Colin wrote:
> 
> Hi,
> 
> I had Squid running perfectly as a reverse proxy but wanted to enable
> the useragent log option. I reconfigured and reinstalled, now squid will
> start but wont work. I can run it as many times as I want, creating more

 What do you mean by 'start but won't work'?
 Especially, what is in cache.log at that particular stage?

> and more squid processes. When I want to reconfigure with the -k option
> I get this error: "squid: ERROR: no running copy". I get the same error
> when using the -k shutdown option. I am running it on Red Hat.
> Can anybody help me out?

  You probably have squid exiting upon attempts to start it.
  Hence reconfigure or shutdown actions may be meaningless when there's
  no running squid.

  P.S.: which version of squid are you using?


> 
> Thanks in advance,
> 
> Colin

-- 

 'Love is truth without any future.'
 (M.E. 1997)


Re: [squid-users] squid + axel

2003-08-10 Thread Antony Stone
On Sunday 10 August 2003 11:38 pm, Bob Arctor wrote:

> it accelerates in two ways :
> 1)if you have more than one connection to the internet, and your proxy does
> load balance, or you have multiple interfaces in your machine, multiple
> parts of file are downloaded via multiple connections
>
> 2) if the server is load balanced, and its domain has many aliases, chosen
> round robin as you connect, each part of file is downloaded from different
> server

Both of these methods seem to assume that your connection to the Internet is 
faster than the rest of the path to the remote server - which I frankly feel 
is unlikely.

I really don't see that opening up multiple connections for downloading from 
a remote server is going to improve your network's performance, unless there 
is a deliberate throttle being placed in your path in order to share 
available bandwidth with other users, in which case trying to bypass it is 
almost certainly against the Acceptable Use Policy of the system you are 
connected through.

If the remote servers are on a round robin DNS, then they're already going to 
be nicely load balanced for different users each downloading complete files, 
so there's no point in creating additional connections for each server by 
only downloading part of the file from it.

I certainly can't see favourable support for this sort of thing getting 
included in Squid.

Regards,

Antony.

-- 

All matter in the Universe can be placed into one of two categories:

1. things which need to be fixed
2. things which will need to be fixed once you've had a few minutes to play 
with them


Re: [squid-users] squid + axel

2003-08-10 Thread Bob Arctor
you take it badly... 

1) if the server is load balanced (has many aliases, and many IPs via
many ISPs) this is the ONLY way to balance load across all of them,
instead of overloading only one

2) it is not intended for use if you have only one IP, and traffic
shapers like shaperd prevent such programs from abusing slow links by
not allowing multiple connections to the same host.

3) it is the _only_ way to balance traffic if you have multiple slow
lines (like a few dialups, as I have)

it is not intended for use with ten T1 connections, to connect to a
server balanced on another ten T1s (although it would work to load
balance such traffic)

it is rather for downloading at reasonable speed on, say, three dialups
(i.e. using a land dialup, a cellphone, and a wireless link to quickly
download something). if you ever visit poland you'll know what i mean :)

also, if a server is load balanced on e.g. 10 slow links, it would allow
you to download from it at greater speed. otherwise no matter how many
links a server owner in siberia manages to get, your DSL will still suck
a big file from his page @ 2k/sec ;)
 

On Sunday 10 August 2003 23:24, Henrik Nordstrom wrote:
> On Sunday 10 August 2003 22.05, Bob Arctor wrote:
> > axel is an download 'accelerator'
> > originally it splits file to parts (equal) , opens local file , and
> > download it.
>
> This kind of things (download 'acceleration') is extremely unlikely to
> make it into Squid as the Squid developers oppose such use of HTTP
> and such anti-social abuse of the Internet resources in general.
>
> The traffic pattern of HTTP is bad as it is. The use of download
> accelerators makes it a horror, intentionally breaking others
> interactive sessions to try to make ones downloads faster.
>
> What we might add to Squid at some point in time is a 'download
> anti-accelerator' which detects the use of a download accelerators
> and makes the requests behave on the Internet like a single normal
> request to make the proxied traffic behave even if you have greedy
> anti-social users. There are however a few technical difficulties in
> doing this mainly related to HTTP protocol timing, but it can most
> likely be done without breaking the results of too many download
> accelerators.
>
> Regards
> Henrik

-- 


[squid-users] Re: [PATCH] external_acl and http_reply_access

2003-08-10 Thread Henrik Nordstrom
On Sunday 10 August 2003 22.17, Joshua Brindle wrote:

> enclosed trivial patch fixes this by checking for ch->reply and
> sending
> reply headers, and ch->request and sending request headers,
> I don't know if there is a situation where neither of these would
> be true,
> I hope not...

Should be using a different format tag I think. Sometimes there is 
overlap between the two, and in http_reply_access you have both kinds 
of headers.

Please register a Squid-3.0 feature request for this in the Squid 
bugzilla tool, making sure it does not get lost somewhere.

Regards
Henrik


Re: [squid-users] POST problems...

2003-08-10 Thread Henrik Nordstrom
On Thursday 07 August 2003 16.47, BERGOTTO Mario TECHTEL wrote:
> Hi everybody.
>
> I'm running squid 2.5 stable 3, and I have the following
> problem.  Every time a user clicks on a 'POST' link, their browsers
> just sit there waiting...  Downloads work just great.

Are you inside a proxy firewall? If so make sure to read the Squid 
FAQ.


-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: [squid-users] squid + axel

2003-08-10 Thread Bob Arctor
axel is a download 'accelerator': it splits a file into equal parts,
opens a local file, and downloads the parts over multiple connections.


On Sunday 10 August 2003 21:34, Henrik Nordstrom wrote:
> On Sunday 10 August 2003 20.39, Bob Arctor wrote:
> > i tried to modify axel.c to make it work as an cgi-bin script ,
> > and with squid rewriting url to point it to cgi-bin script, but
> > after a while of hacking i concluded it is pointless.
>
> What is axel?
>
> Regards
> Henrik

-- 


[squid-users] squid + axel

2003-08-10 Thread Bob Arctor
I tried to modify axel.c to make it work as a cgi-bin script, 
with squid rewriting the URL to point to the cgi-bin script, but after a 
while of hacking I concluded it is pointless.

In axel.c the main thread joins data flowing in from the http 
connections on the filesystem. I would have to create an array where the 
data would be joined, and then cat it to stdout into squid's input...

It would be much better if squid could just 'adopt' code from axel.c.

I gzipped my attempt to 
http://217.97.12.194/Grzybnia/sources/axel-1.0a-broken.tar.gz 

If anyone would be so kind as to help me with merging it, it would maybe 
work... 

I would store 'actual' downloads in squid's filesystem, with an extra 
extension of conn.number... and try to join them into one file only if 
the file doesn't exceed the maximum file size allowed to be stored.

-- 


Re: [squid-users] Squid3: vhost reverse proxy/accel bw extender

2003-08-10 Thread Henrik Nordstrom
On Friday 08 August 2003 02.35, Jim Flowers wrote:
> Yes, by definition name-based hosts use the same ip number but have
> different host.domain.tlds.  If I use only one cache_peer line, how
> do I configure more than one name-based virtual host on the server
> with that ip address?

It is all automatic unless you rewrite the host component of the 
request via a redirector or forcing the domain in cache_peer...

Which domains to send to which origin server is controlled by 
cache_peer_access.

A small example of a Squid-3 accelerator setup with one virtual host 
based port 80, forwarding 4 domains to 2 different servers (2 domains 
per server) with a default domain for old clients not supporting the 
host header.


http_port 80 accel vhost defaultsite=www.example.com

cache_peer 192.0.2.54 parent 80 0 no-query originserver name=vhost1
acl vhost1_domains dstdomain www.example.com partners.example.com
cache_peer_access vhost1 allow vhost1_domains
http_access allow vhost1_domains

cache_peer 192.0.2.60 parent 80 0 no-query originserver name=vhost2
acl vhost2_domains dstdomain bugs.example.com support.example.com
cache_peer_access vhost2 allow vhost2_domains
http_access allow vhost2_domains




If the domains to pass to the origin web servers are different from 
what the end-user requests in his web browser, then the preferred 
solution is to fix the origin servers to accept the official domains 
requested by the end user. Alternatively you can use a redirector to 
rewrite the URL while it is forwarded by Squid, but this approach will 
give you problems with most applications, where the origin server will 
sometimes try to redirect or send the end-user to the domain the 
origin server thinks is the correct domain name for the server.

If you use the redirector approach then the domains to use in 
http_access will be the external domains, while cache_peer_access 
uses the internal domains (after redirector rewriting of the URL).

Regards
Henrik


-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


RE: [squid-users] cache query when switching squid servers

2003-08-10 Thread Hermann Strassner
> i guess this question arose because i installed squid to conserve office
> bandwidth..
>
> and if i've got 1 gig of cache already... why download it again..!

Forget about it. Normally, after 2 or 3 days most of the cache is outdated
and must be loaded again, no matter how much space you have in your cache.

Hermann



Re: [squid-users] cache query when switching squid servers

2003-08-10 Thread Marc Elsen


Andrew Thomson wrote:
> 
> thanks for the comments..
> 
> i guess this question arose because i installed squid to conserve office
> bandwidth..
> 
> and if i've got 1 gig of cache already... why download it again..!
> 
 Debatable, versus the TOE (time of effort) of the intended cache
 move operations.

 With my salary parameters :-) :

 I have 1500 users, the average cache access rate is 12GB a day, the hit
 rate is 50%, so I have 6GB of pure Internet bandwidth allocated each
 day anyway...

 M.


Re: [squid-users] Squid3: vhost reverse proxy/accel bw extender

2003-08-10 Thread Jim Flowers
Henrik,

Thanks very much for your clear explanation and illuminating example.

My problem has been understanding the terminology, and particularly the 
term 'virtual host'.  I now think that in Squidish, 'virtual host' refers to 
a single ip address which can respond to queries with the appropriate pages 
for multiple 'domains'.  Domains in this sense being the FQDN (e.g. 
www.example.com).

In the cache_peer directive option name=vhost1, 'vhost1' is just an 
identifier used by cache_peer_access to obtain the originserver ip address.

For name-based virtual hosting, the domains (FQDN) provided by the 
accelerator to the originserver are the same as those used in the query from 
the browser to the accelerator.

Now to learn about redirectors.  I understand your caution about the 
originserver providing redirection to the browser; however, as this usage is 
limited to a demonstration of bandwidth extension, and both the originserver 
and accelerator are available to the browser, it should work ok -- I think.

So, I use a redirect program to rewrite the original query from 
http://www.domain2.com.domain1.com to http://www.domain2.com, apply the 
http_access and cache_peer_access directives correctly, and use an acl with 
www.domain2.com.domain1.com in it with the redirector_access directive to 
limit the use of the redirect program. 

I think?
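The rewrite described above can be sketched as a tiny redirector program (an illustrative sketch under stated assumptions: the `.domain1.com` suffix is taken from the example above, and the helper follows the classic Squid redirector protocol of one request per line on stdin, answered with the rewritten URL, or an empty line for "no change", on stdout):

```python
#!/usr/bin/env python
# Hypothetical redirector sketch: strip a trailing ".domain1.com" from
# the host, so http://www.domain2.com.domain1.com/... is rewritten to
# http://www.domain2.com/... (ports and userinfo dropped for brevity).
import sys
from urllib.parse import urlsplit, urlunsplit

OUTER_SUFFIX = ".domain1.com"  # assumption: the accelerator's own domain

def rewrite_url(url: str) -> str:
    parts = urlsplit(url)
    host = parts.hostname or ""
    if host.endswith(OUTER_SUFFIX):
        inner = host[:-len(OUTER_SUFFIX)]
        return urlunsplit((parts.scheme, inner, parts.path,
                           parts.query, parts.fragment))
    return ""  # empty answer tells Squid to leave the URL alone

def rewrite_line(line: str) -> str:
    # Squid sends "URL client-ip/fqdn ident method"; the URL is field 1.
    url = line.split()[0]
    return rewrite_url(url)

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(rewrite_line(line) + "\n")
        sys.stdout.flush()  # Squid reads one answer per request
```

It would be wired in via the redirect_program directive (url_rewrite_program in later Squid versions), limited with redirector_access as described; the script path is hypothetical.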

--
Jim Flowers<[EMAIL PROTECTED]>

-- Original Message ---
From: Henrik Nordstrom <[EMAIL PROTECTED]>

---

> It is all automatic unless you rewrite the host component of the 
> request via a redirector or forcing the domain in cache_peer...
> 
> Which domains to send to which origin server is controlled by 
> cache_peer_access.
> 
> A small example of a Squid-3 accelerator setup with one virtual host 
> based port 80, forwarding 4 domains to 2 different servers (2 
> domains per server) with a default domain for old clients not 
> supporting the host header.
> 
> http_port 80 accel vhost defaultsite=www.example.com
> 
> cache_peer 192.0.2.54 parent 80 0 no-query originserver name=vhost1
> acl vhost1_domains dstdomain www.example.com partners.example.com
> cache_peer_access vhost1 allow vhost1_domains
> http_access allow vhost1_domains
> 
> cache_peer 192.0.2.60 parent 80 0 no-query originserver name=vhost2
> acl vhost2_domains dstdomain bugs.example.com support.example.com
> cache_peer_access vhost2 allow vhost2_domains
> http_access allow vhost2_domains
> 
> If the domains to pass to the origin web servers is different from 
> what the end-user requests in his web browser then the preferred 
> solution is to fix the origin servers to accept the official domains 
> requested by the end user. Alternatively you can use a redirector to 
> rewrite the URL while it is forwarded by Squid but this approach 
> will give you problems with most applications where the origin 
> server will sometimes try to redirect or send the end-user to the 
> domain the origin server thinks is the correct domain name for the server..
> 
> If you use the redirectror approach then the domains to use in 
> http_access will be the external domains, while cache_peer_access 
> uses the internal domains (after redirector rewriting of the URL).
> 
> Regards
> Henrik



Re[2]: [squid-users] LAG !!!

2003-08-10 Thread squid_user
Hello Schelstraete,

Sunday, August 10, 2003, 3:08:21 PM, you wrote:

SB> squid_user wrote:

>>Hello everyone,
>>
>>I've been using squid for about 1 year. I didn't notice it earlier,
>>but lately I found that when I want to open some WWW pages I sometimes
>>have to wait 10-15 seconds before the browser shows me anything.
>>
>>Is that normal? Or maybe I should add something to squid.conf to
>>avoid this lag... I don't know; please help me solve this problem.
>>
>>When I turn off squid, web browsing works much more quickly.
>>
>>I will be thankful for any advice
>>
>>  
>>
SB> Maybe DNS problem on the Squid proxy server?



SB>Bart

I have just checked chache.log. Last lines looks like that:

2003/08/10 14:38:25| Performing DNS Tests...
2003/08/10 14:38:25| Successful DNS name lookup tests...
2003/08/10 14:38:25| DNS Socket created at 0.0.0.0, port 32784, FD 4
2003/08/10 14:38:25| Adding nameserver 194.204.152.34 from /etc/resolv.conf
2003/08/10 14:38:25| Adding nameserver 217.98.63.164 from /etc/resolv.conf

so I think DNS is not the reason. DNS should be OK, because when I
turn off squid, browsing works better.

I use version 2.5.STABLE2



  :(



[squid-users] external_acl and http_reply_access

2003-08-10 Thread Joshua Brindle
Following the advice of rc, I'm trying to implement an external_acl 
that will handle redirecting any page that comes back with an
X-Naughty header. I've been playing around with an external acl
and I always get data back if I use something like %LOGIN or %PATH,
but I cannot get any header info back with %{header}. In 
squid.conf it says "request header", but I figured that was just
an oversight of using external acls in http_access; alas,
it does not appear to be giving me reply headers back :(

Is this a known issue, I've been digging through source code
and the only relavent thing i've found is 

case _external_acl_format::EXT_ACL_HEADER:
    sb = httpHeaderGetByName(&request->header, format->header);
    str = sb.buf();
    break;

which specifically returns the request header. Is there a way
to make this check which side of the request we are on, or
will a new type like %{reply:header} need to be created?
I'll play around with this a bit, but I'd like the opinion of
the Squid gurus.

Joshua Brindle
UNIX Administrator
Southern Nazarene University


Re: [squid-users] CPU utilization performance issue

2003-08-10 Thread Henrik Nordstrom
On Saturday 09 August 2003 01.20, Adam Aube wrote:

> How can RAID0 have worse performance than RAID5? RAID0 was
> designed to optimize disk write performance by striping writes
> across multiple disks.

Because it does not add any redundancy or performance to a Squid 
setup. RAID0 only adds drawbacks compared to separate drives.

> I would think that RAID0 would at least outperform RAID1.

One cache_dir per drive gives maximal performance and flexibility for 
Squid, but no automatic fault management. If you complement this with 
software which removes the failed cache_dir from squid.conf and 
restarts Squid if a drive should fail, you do get quite an acceptable 
level of fault recovery.

RAID0 performs close to one cache_dir per drive, but has the drawback 
that if one drive fails, the whole RAID0 set of drives is gone.

RAID1 adds redundancy, but nearly doubles the number of drives needed 
for the same performance compared to separate drives.

RAID5 is slower than RAID1 for Squid, but if you don't really need the 
speed and you must have redundancy, it may be acceptable. However, 
given the low price of hard drives there is barely any motivation to 
use RAID5 instead of RAID1 for Squid.


My recommended base setup is a RAID1 set for OS + logs, then 
separate drives for as many cache directories as you need. For low-end 
setups the RAID1 set may also be used for cache.

On high-end setups, using RAID1 for the cache drives is the recommended 
method of increasing reliability and decreasing management cost, 
at only the cost of doubling the number of drives needed for cache.
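A minimal squid.conf sketch of the one-cache_dir-per-drive layout (the
mount points and sizes are illustrative, not recommendations):

```
# one cache_dir per physical drive, no RAID for the cache
cache_dir ufs /cache1 20000 16 256
cache_dir ufs /cache2 20000 16 256
cache_dir ufs /cache3 20000 16 256
```

If the drive behind /cache2 fails, remove its line and restart Squid;
the remaining directories keep serving.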

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: Fw: [squid-users] Strange Log

2003-08-10 Thread Henrik Nordstrom
On Sat, 9 Aug 2003, Awie wrote:

> However, the kernel panic only happens to Squid when reading/writing the
> cache. I can upgrade the kernel and compile software just fine. After the
> kernel panic there are kernel Oops messages and Linux seems OK, but Squid
> became very slow.

After a Kernel Oops all bets are off on kernel functionality and the box 
should be rebooted.

> Anyway, I don't mean that Squid is facing this issue. After I downgraded
> the firmware, everything seems to run normally.

Excellent. Now report to the vendor that this new firmware seems to be 
broken with Linux.

Regards
Henrik



Re: [squid-users] external_acl and http_reply_access

2003-08-10 Thread Henrik Nordstrom
On Sunday 10 August 2003 10.31, Joshua Brindle wrote:

> X-Naughty header. I've been playing around with an external acl,
> and I always get data back if I use something like %LOGIN or %PATH,
> but I cannot get any header info back with %{header}. In
> squid.conf it says "request header", but I figured that was just
> an oversight from using external acls in http_access; alas,
> it does not appear to be giving me reply headers back :(


There is no external acl method for accessing reply headers, only 
request headers.

Also, in squid-2.5 external acl methods are not suitable for use in 
http_reply_access, as http_reply_access cannot wait for any external 
lookups to complete. The latter is addressed in Squid-3, I think.


(Access controls using reply headers must take place in 
http_reply_access, as they need access to the reply; http_access 
executes before the request is forwarded.)
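For the request side, where %{Header} does work, a hypothetical
squid.conf sketch (the helper path and ACL names are made up for
illustration):

```
# match on a *request* header via an external acl (request side only)
external_acl_type agent_check %{User-Agent} /usr/local/bin/check_agent
acl blocked_agent external agent_check
http_access deny blocked_agent
```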

Regards
Henrik



Re: [squid-users] Strange Log

2003-08-10 Thread Henrik Nordstrom
On Friday 08 August 2003 05.20, Awie wrote:

> Thu Aug 7 16:10:10 2003.379 RELEASE -1 
> 6BF7EAEC6EEDE0ABAB7063E887CF6E9E ? ? ? ? ?/? ?/? ? ?
> Thu Aug 7 16:10:10 2003.379 RELEASE -1 
> 7A32312ABBE6E4C4BA9C60A712F5CA52 ? ? ? ? ?/? ?/? ? ?

These are fairly normal store.log entries.

If you got these in another log file, then your filesystem became 
corrupted in the kernel panic, causing data intended for one file to 
show up in another (not too uncommon in such situations).

The kernel panic is an issue you need to get to the bottom of. It is 
an OS/hardware problem, not a Squid problem, and Squid can only be as 
reliable as the server it runs on.

Regards
Henrik



[squid-users] LAG !!!

2003-08-10 Thread squid_user
Hello everyone,

I've been using Squid for about a year. I didn't notice it earlier,
but lately I've found that when I want to open some WWW pages I
sometimes have to wait 10-15 seconds before the browser shows me
anything.

Is that normal? Or maybe I should add something to squid.conf to
avoid this lag... I don't know; please help me solve this problem.

When I turn off Squid, web browsing is much quicker.

I will be thankful for any advice.



my squid.conf:

http_port 3128
#ftp_port 3128
icp_port 0
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 64 MB
cache_dir ufs /cache 200 16 256
redirect_rewrites_host_header off
#replacement_policy GDSF
acl localnet src 192.168.1.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl Safe_ports port 80 443 210 119 70 21 1025-65535
acl CONNECT method CONNECT
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
http_access allow localnet
http_access allow localhost
http_access deny !Safe_ports
http_access deny CONNECT
http_access deny all
maximum_object_size 2000 KB
ipcache_size 1024
ipcache_low  90
ipcache_high 95
cache_mgr [EMAIL PROTECTED]
cache_effective_user squid
cache_effective_group squid
log_icp_queries off
cachemgr_passwd tajnehaselko all
buffered_logs on
positive_dns_ttl 6 hours



RE: [squid-users] CPU utilization performance issue

2003-08-10 Thread Adam Aube
> Can somebody explain to me why it's worth considering putting
> a Squid cache onto a Raid setup anyway?

RAID isn't just for precious data - it's to keep a disk failure
from taking down your system. Without RAID, if your cache disk
crashed, so would Squid.

Also, many systems come with all the disks set up in a RAID array.

That said, Squid causes major performance problems on RAID5,
which is the most common form of RAID.

I'm not sure about RAID1 - I think there was a discussion about
using 2 mirrored RAID1 sets instead of RAID5, but I'm not sure.

Adam


[squid-users] Problems with the ncsa_auth

2003-08-10 Thread Antonio Lopez Mercader
Hello,
I have Red Hat 9 with Squid-2.5.STABLE1-2 working fine, and now I have
decided to use user authentication with the ncsa module.
I have configured my system to the point where the browsers (tried with
Mozilla and Internet Explorer) ask for a login and password, but whatever
I write, they ask again (sometimes they keep asking over and over) and
then report "The connection was refused when attempting to contact
www.webpage.com".
I suppose it might be a problem related to the password file, but I
can't guess what. All paths seem to be correct.

I installed Squid as an rpm package.
In squid.conf I have added:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd

acl password proxy_auth REQUIRED

http_access allow password (before http_access deny all)

I have added some users to /etc/squid/squid_passwd with
/usr/bin/htpasswd.
I have tried all encryption modes without success.
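One way to check the password file outside Squid is to run the helper
by hand (the helper and password-file paths are taken from the config
above; the OK/ERR output is the basic auth helper protocol):

```
# feed "username password" on stdin; the helper prints OK or ERR
echo "testuser testpass" | /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
```

If this prints ERR for a known-good user, the problem is in the
password file (for example an unsupported hash format) rather than in
squid.conf.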

Is there a log file I could check for more info?


Antonio López Mercader
Ing. Tec. Informático
Hospital San Jorge - Huesca (SPAIN)




Re: [squid-users] CPU utilization performance issue

2003-08-10 Thread Antony Stone
On Friday 08 August 2003 7:26 pm, Schelstraete Bart wrote:

> 'Multiple cache disks', does that included hardware raids, because that
> are also 'multiple disks'.
> (but one disk for the OS).

Can somebody explain to me why it's worth considering putting a Squid cache 
onto a RAID setup anyway?

My thinking is:

RAID is for precious data, where you don't want to lose anything because of 
hardware failure (which is reasonably likely with hard disks, depending on 
how long you wait).

A Squid cache is not 'precious data', but you want access to it to be as 
fast as possible and you want maximum performance for your money; 
therefore you don't want to pay for an extra drive to get the redundancy 
of RAID.

Therefore, I cannot see any purpose at all in putting a Squid cache onto 
RAID - surely if you have multiple disks, it is better simply to create a 
filesystem on each and put those into your squid.conf file as multiple 
"cache_dir"s?

Perhaps someone can enlighten me on this.

Regards,

Antony.

-- 

It is also possible that putting the birds in a laboratory setting
inadvertently renders them relatively incompetent.

 - Daniel C Dennett


Re: [squid-users] accounting

2003-08-10 Thread Henrik Nordstrom
On Tuesday 05 August 2003 09.57, Agri wrote:
> I'm trying to do accounting with Squid.
>
> Squid logs into access.log the number of bytes transmitted to a
> client... That's not enough for me; I need to log the number of
> bytes received from the Internet for a particular request. How can
> I do that? :-)

By not allowing Squid to fetch more data than requested.

See quick_abort_* directives and range_offset_limit.
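A hedged sketch of squid.conf settings in that direction (the values
are illustrative, not recommendations):

```
# stop fetching as soon as the client aborts, however much is left
quick_abort_min 0 KB
quick_abort_max 0 KB
# never fetch the whole object just to satisfy a Range request
range_offset_limit 0 KB
```

With settings like these, the bytes Squid receives for a request stay
close to the bytes it sends to the client (aside from headers and
cache hits).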

Regards
Henrik


Re: [squid-users] LDAP Auth and Squid Accelerator Mode

2003-08-10 Thread Henrik Nordstrom
On Friday 08 August 2003 15.48, [EMAIL PROTECTED] wrote:
> Hello all,
>
> In previous attempts at trying to get squid to work in accelerator
> mode with authentication I was unsuccessful.
>
> I found a patch that is supposed to do it here:
> www.poulpy.com/proj.php?PROJID=2

The only line in that patch which is correct is the change to enable 
AUTH_ON_ACCELERATION, but it is done in the wrong place.

The correct way to enable AUTH_ON_ACCELERATION is to add it to CFLAGS 
in src/Makefile after running configure.
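A sketch of that edit (the surrounding CFLAGS value is illustrative;
only the appended define matters):

```
# src/Makefile, after running ./configure: append the define to CFLAGS
CFLAGS = -g -O2 -Wall -DAUTH_ON_ACCELERATION
```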

The rest of the patch is either not good, not needed, or specific to 
his machine.

The reason why AUTH_ON_ACCELERATION is a little hidden like this is 
that it collides with using Squid in interception proxy/cache mode, 
transparently intercepting port 80.


In Squid-3.0 this is cleaned up and there are no hidden defines needed 
to enable authentication in accelerator mode.

> The ldap auth module that comes with 2.5 STABLE 1, I was unable to
> get it to work.

Why?

> I did get the ldap module I downloaded from here to work:
> freshmeat.net/projects/squid_auth_ldap/?topic_id=90

Then you should be able to get the standard LDAP helper working as 
well.

squid_auth_ldap uses a search filter similar to 
"(&(uid=%s)(objectClass=Person))", I think. See the squid_ldap_auth 
man page for other examples of filters. Some knowledge of the 
structure of your LDAP directory helps a lot, and ldapsearch is a good 
tool for gaining that understanding if you do not already know the 
directory structure.
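For reference, a hypothetical squid.conf line using the standard
helper with that kind of filter (the base DN, filter, and server name
are placeholders for your directory):

```
auth_param basic program /usr/lib/squid/squid_ldap_auth \
    -b "dc=example,dc=com" -f "(&(uid=%s)(objectClass=Person))" \
    ldap.example.com
```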

As long as you have a working helper, things are fine on that front. 
It should be noted, however, that the squid_auth_ldap helper did not 
fully support Squid-2.5 the last time I looked, and you may get 
trouble from it if your users have strange characters in their logins 
or passwords.

Regards
Henrik
