[squid-users] Files > 2Gb on FTP sites via Squid

2005-02-21 Thread davep
Running Squid-2.5.STABLE4 (plus security patches, Mandrake 10.0 
distribution).

When accessing an FTP site containing large files (typically DVD images) 
the file size is reported as negative. For example:

ftp://ftp.mirror.ac.uk/sites/fedora.redhat.com/3/x86_64/iso/
A command-line connection to the site reports the correct size.
Firefox refuses to download the DVD image due to the negative file size.
Is there any fix, or plans for one?
--
Dave


Re: [squid-users] squid-2.5.STABLE8 compilation error

2005-02-21 Thread Joost de Heer
 /usr/bin/ld: cannot find -lz

zlib isn't installed.
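
Something like the following should fix it; the package names are only
examples and vary by distribution:

apt-get install zlib1g-dev    # Debian
yum install zlib-devel        # Red Hat-style systems

then re-run configure and make.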

Joost



Re: [squid-users] The Pushcache patch...

2005-02-21 Thread Marco Crucianelli
On Sun, 2005-02-20 at 00:54 +0100, Henrik Nordstrom wrote:
 On Wed, 9 Feb 2005, Marco Crucianelli wrote:
 
  Hi, I would like to have  (well, to be honest, I do really need it for
  the final work of my degree!) a *working* squid that does use
  pushcaching.
  I've been looking at the pushcache patch on
  http://devel.squid-cache.org/ but it seems to be a stale project! In
  fact, looking at the CVS branch with tag PUSH, it seems like the patch
  is 2 years old! Do you know if there's anything new? Do any of you use
  a *working* pushcache squid using that patch? Where can I get more info
  on how that patch works (besides the push.txt in the /doc of the CVS
  push branch)?
 
 Asking the authors of the pushcache patch may be worthwhile.. They do have 
 a version ready for download from their web site.
 
 http://www.pushcache.com/
 
 Regards
 Henrik

Thanks, I've already asked Jon Kay! ;)


Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-21 Thread Marco Crucianelli
On Fri, 2005-02-18 at 16:10 -0200, H Matik wrote:
 On Friday 18 February 2005 08:34, Marco Crucianelli wrote:
  On Thu, 2005-02-17 at 12:52 -0600, Kevin wrote:
   What mechanism are you using to set expire times?
 
  Well, I'm still not sure what I shall use! I mean: should I use
  refresh_pattern?! Or what? I mean, refresh_pattern lets me change the
  refresh period based on the site URL, right? What else could I use?!
 
 
 when I suggested the choice of two caches, one for small objects and one for 
 large objects, the focus was not on refresh patterns
 
 the goal here is that you can, as a first priority, use an OS and especially an 
 HD system tuned for serving small or large files. This certainly will not be 
 possible running two squids on one machine
 
 the second point is that you can use max|min_object_size in order to limit the 
 file size served by each server. My experience showed best results 
 breaking at 512K on modern PCs

Do you mean using max_object_size=512K for the small_object squid?

 
 third step is to use cache_replacement_policy LFUDA/GDSF accordingly, and if 
 using diskd you may play with Q1 and Q2, which is what will make the difference
 

Yes, I was thinking exactly of using GDSF for the small_object squid,
and LFUDA for the big_file squid!

 and to make sense you push from the large_obj_cache with proxy-only set
 

Right!

 to achieve this correctly you may or should set additional always|never_direct 
 rules for known mpg, avi, wmv, mp3, iso and other extensions, so that the 
 small_obj_cache really pulls them from the large_obj_cache
 
 Hans
 
 

I was thinking about using a url_regex ACL to direct avi, mp3, iso etc. from
the small_object (front-end) squid to the big_object (back-end) squid
together with the cache_peer_access directive... do you think I can do it
this way?

 
 
 
But, what about staleness? Can I set up the refresh time in squid... with
which directive?!


Once again, many thanks


Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Marco Crucianelli wrote:
I was thinking about using a url_regex ACL to direct avi, mp3, iso etc. from
the small_object (front-end) squid to the big_object (back-end) squid
together with the cache_peer_access directive... do you think I can do it
this way?
Yes, but you may want to also use never_direct or the same.
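For example, a minimal sketch for the front-end squid (hostname, port and
extensions are placeholders, untested):

acl bf urlpath_regex -i \.(avi|mp3|iso)$
cache_peer bigcache.example.com parent 3128 0 no-query proxy-only
cache_peer_access bigcache.example.com allow bf
cache_peer_access bigcache.example.com deny all
never_direct allow bf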
But, what about staleness? Can I set up the refresh time in squid... with
which directive?!
refresh_pattern
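For example (the numbers are only illustrative; the fields are min,
percent and max, with times in minutes):

refresh_pattern -i \.(avi|mp3|iso)$ 10080 90% 43200
refresh_pattern . 0 20% 4320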
Regards
Henrik


Re: [squid-users] retrieving data from cache

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Péntek Imre wrote:
On 21 February 2005 at 04:10, Henrik Nordstrom wrote:
If you have the store.log from when the file was stored then the file
number can be found there.
okay, I've got the store log.
I suppose this number is needed:
1A784FA9731651749D1A8C28C8C338C3
but I don't know how to decode it... Can you help me?
No, this is the store key. You need the file number.
http://www.squid-cache.org/Doc/FAQ/FAQ-6.html#ss6.4
The file number directly maps to the cache_dir file name. The number in 
front of it is your cache_dir (if you have more than one).
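For example, to locate the store.log line for your key:

grep 1A784FA9731651749D1A8C28C8C338C3 store.log

Assuming the default layout of 16 first-level and 256 second-level
directories, swap file number 00000123 in the first cache_dir would live at
<cache_dir>/00/01/00000123: the first-level directory is
(filenumber / 256 / 256) mod 16 and the second-level one is
(filenumber / 256) mod 256, as described in the FAQ chapter above.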

Regards
Henrik

Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-21 Thread H Matik
On Monday 21 February 2005 07:24, Marco Crucianelli wrote:
 On Fri, 2005-02-18 at 16:10 -0200, H Matik wrote:
  On Friday 18 February 2005 08:34, Marco Crucianelli wrote:
   On Thu, 2005-02-17 at 12:52 -0600, Kevin wrote:
What mechanism are you using to set expire times?
  

 Do you mean using max_object_size=512K for the small_object squid?

yes
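
note that in squid.conf the directives are spelled maximum_object_size and
minimum_object_size; a sketch of the split, using the 512K break suggested
above:

# small-object squid
maximum_object_size 512 KB

# big-object squid
minimum_object_size 512 KB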


 I was thinking about using a url_regex ACL to direct avi, mp3, iso etc. from
 the small_object (front-end) squid to the big_object (back-end) squid
 together with the cache_peer_access directive... do you think I can do it
 this way?


this should work, you add other extensions as you need 

acl bf urlpath_regex \.mpg
acl bf urlpath_regex \.avi
acl bf urlpath_regex \.wmv

never_direct allow bf
always_direct deny bf



 But, what about staleness? Can I set up the refresh time in squid... with
 which directive?!

you can use refresh_pattern


Hans



 Once again, many thanks

-- 
___
Infomatik
(18)8112.7007
http://info.matik.com.br
Mensagens não assinadas com GPG não são minhas.
Messages without GPG signature are not from me.
___




[squid-users] squid and outlook web access

2005-02-21 Thread Guy Speier
Hello,

I am sorry for the multiple posts, but I am stuck on resolution for
this.

We have an internal (i.e. on our LAN) outlook web server that we are
trying to provide internet access to, via a squid server.

The internet client will browse to http://exchangeserver.domain.com:443

Our firewall will perform port address translation from 443 to 3128 and
direct traffic to our squid server.

Our squid server listens on 3128 (http) and then redirects to
webmail.domain.com (on port 443).

Here are my entries in squid.conf:
http_port 3128
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_reply_access allow all
httpd_accel_host webmail.firstlogic.com
httpd_accel_port  443

I would really appreciate help on this issue.

Thanks,
Guy


[squid-users] Is Squid FIPS certified?

2005-02-21 Thread Maya Zimerman
Hi,
Does anyone know if Squid is FIPS certified?
Thanks
Maya



Re: [squid-users] retrieving data from cache

2005-02-21 Thread Pntek Imre
On 21 February 2005 at 11:42, Henrik Nordstrom wrote:
 http://www.squid-cache.org/Doc/FAQ/FAQ-6.html#ss6.4
okay, thanks.
-- 
With regards: Ifj. Péntek Imre
E-mail: [EMAIL PROTECTED]


Re: [squid-users] Is Squid FIPS certified?

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Maya Zimerman wrote:
Does anyone know if Squid is FIPS certified?
FIPS certification does not apply to Squid as Squid is not a cryptographic 
module.

OpenSSL (the cryptographic module used by Squid for SSL accelerator 
support) is being FIPS certified however. For more information on OpenSSL 
FIPS certification see the following link:

  http://oss-institute.org/index.php?option=content&task=view&id=39&Itemid=
which also documents how to build your software (i.e. Squid) to comply 
with this pending FIPS certification.

Regards
Henrik


Re: [squid-users] squid and outlook web access

2005-02-21 Thread Henrik Nordstrom

On Mon, 21 Feb 2005, Guy Speier wrote:
Hello,
I am sorry for the multiple posts, but I am stuck on resolution for
this.
We have an internal (i.e. on our LAN) outlook web server that we are
trying to provide internet access to, via a squid server.
The internet client will browse to http://exchangeserver.domain.com:443
This looks odd.. http on port 443. Shouldn't this be https? (which 
requires https_port in squid.conf..)
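Something like this, assuming a build with --enable-ssl (the certificate
paths are placeholders):

https_port 443 cert=/usr/local/squid/etc/server.pem key=/usr/local/squid/etc/server.key
httpd_accel_host webmail.domain.com
httpd_accel_port 443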

Regards
Henrik


RE: [squid-users] squid and outlook web access

2005-02-21 Thread Guy Speier
Thank you Henrik,

I will recompile with --enable-ssl and change http_port to https_port.

I have two questions regarding this:
1) Do I need to have a certificate (from Verisign or anywhere else)?
2) Is samba required for this configuration?



Re: [squid-users] Squid, virtual IP and Layer 7 switching...any idea?

2005-02-21 Thread Marco Crucianelli
First of all, thanks Henrik!

The problem was mine. I mean: what I was thinking of was a Layer 7
solution using virtual IP addresses, just to let the two squids answer to
the clients without passing back through the Layer 7 machine! In such a
case I do need virtual IPs and there should surely be some things to
modify in squid.conf.

Anyway, yes, using a simple Layer 7 solution together with NAT I don't
need any virtual IP, nor any particular change in squid.conf!


Thanks!

Marco


Re: [squid-users] Squid, virtual IP and Layer 7 switching...any idea?

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Marco Crucianelli wrote:
I mean: what I was thinking of was a Layer 7 solution using virtual IP
addresses, just to let the two squids answer to the clients without passing
back through the Layer 7 machine! In such a case I do need virtual IP
and there should surely be some things to modify in squid.conf
No, there is nothing to modify in squid.conf when you use a virtual IP.
The Squid configuration is 100% the same as when using NAT.

The difference is in your OS IP configuration only. Not Squid.
Regards
Henrik


Re: [squid-users] Files > 2Gb on FTP sites via Squid

2005-02-21 Thread Martin Joseph
I am a newbie so be warned!
On Feb 21, 2005, at 12:55 AM, davep wrote:
Running Squid-2.5.STABLE4 (plus security patches, Mandrake 10.0 
distribution).

When accessing an FTP site containing large files (typically DVD 
images) the file size is reported as negative. For example:
Squid is not an FTP proxy.  This issue is probably not a squid issue.
ftp://ftp.mirror.ac.uk/sites/fedora.redhat.com/3/x86_64/iso/
A command-line connection to the site reports the correct size.
Firefox refuses to download the DVD image due to the negative file 
size.
Perhaps this is a firefox issue?
Please take this with a bunch of salt.
Marty


[squid-users] compilation issue

2005-02-21 Thread Guy Speier
Hello,

I am trying to compile squid (STABLE8 on solaris9) after the following
configure:

./configure --enable-ssl --prefix=/usr/local/bin/squid
--with-openssl=/usr/local/ssl

And I get:
cc -DHAVE_CONFIG_H -I. -I. -I../include -I../include -I../include
-I/usr/local/ssl/include  -g -c `test -f rfc2617.c || echo
'./'`rfc2617.c
../include/md5.h, line 18: #error: Cannot find OpenSSL headers
cc: acomp failed for rfc2617.c
*** Error code 2
make: Fatal error: Command failed for target `rfc2617.o'
Current working directory
/userdata/users/guys/squid/squid-2.5.STABLE8/lib
*** Error code 1
make: Fatal error: Command failed for target `all-recursive'

I have also tried --with-openssl=/usr/local/ssl/include/openssl
and --with-openssl=/usr/local/ssl/include.

And I do have the headers at:
/usr/local/ssl/include/openssl


Thank you for your help!
Guy



[squid-users] Strange reject of users (basic auth)

2005-02-21 Thread Janno de Wit
Hi folks,

I have a strange Squid behavior with basic authentication. 
Situation:
- I have 3 squid servers (2 dual xeon, 1 p3-1200).
Tonight Squid is stopped, some configuration files are updated, and
squid is started again. All with basic authentication via a Radius
helper.

Next day I fire up my browser and it keeps asking for my password, while
the radius helper started by hand says my password is okay.

It's not only on one squid box; some mornings 2 boxes have
problems.. strangely enough the 3rd box doesn't give errors (it's not
in the main LVS proxy array, but is a test/standby server).

What I do is a squid -k reconfigure on the box(es) giving problems...
and then everything is okay.

Does anybody have *any* idea where I can search for this? I already tried
'squid -k debug', but that gave me a logfile of gigabytes within
minutes...

The problem occurs not only in the morning (= low hits/sec); I got it just
10 minutes ago on a server under full load.
After 10 minutes it suddenly 'repairs' itself and everything works fine again..

I noticed that after some time of struggling with the password input,
the problem occurs with new users (passwords not in squid's cache memory)

Some more info from cachemgr:
- 98% space of 128MB cachemem used
- no queue for the 32 basicauth helpers
- max used helpers ~= 3
- external helper working okay (using external helper for database
  lookups, works fine; it just gives problems when trying to authorize
  basic-auth clients).

Thanks for your info, Janno.

-- 
Janno de Wit
DNA Services B.V.




[squid-users] RE: compilation issue

2005-02-21 Thread Guy Speier
By modifying the file rfc2617.c and giving the absolute path to md5.h, I
get further, but still fail at:
Making all in fs
source='ufs/store_dir_ufs.c' object='ufs/store_dir_ufs.o' libtool=no \
depfile='.deps/ufs/store_dir_ufs.Po'
tmpdepfile='.deps/ufs/store_dir_ufs.TPo' \
depmode=none /bin/sh ../../cfgaux/depcomp \
cc -DHAVE_CONFIG_H -I. -I. -I../../include -I. -I../../include
-I../../include  -I../../src/  -I/usr/local/ssl/include  -g -c -o
ufs/store_dir_ufs.o `test -f ufs/store_dir_ufs.c || echo
'./'`ufs/store_dir_ufs.c
../../src/ssl_support.h, line 46: warning: old-style declaration or
incorrect type for: SSL_CTX
../../src/ssl_support.h, line 46: syntax error before or at: *
../../src/ssl_support.h, line 46: warning: old-style declaration or
incorrect type for: sslCreateContext
../../src/structs.h, line 816: syntax error before or at: SSL
cc: acomp failed for ufs/store_dir_ufs.c

Do you have any ideas of what could cause this?




Re: [squid-users] Strange reject of users (basic auth)

2005-02-21 Thread Janno de Wit
Janno de Wit ([EMAIL PROTECTED]) wrote:
 Hi folks,

Forgot some info: running Squid 2.5 stable 6

Janno.

-- 
Janno de Wit
DNA Services B.V.




[squid-users] Best performance for squid

2005-02-21 Thread Carlos Eduardo Gomes Marins
Hi,
 
What is the best filesystem to use with Squid, ReiserFS or ext3?
I've read in an article that ReiserFS is faster for small files, and I've
read in some posts on this list that it could be worthwhile for the
cache to use ReiserFS. Is it really true? Does ReiserFS really outperform
ext3 when used for cache purposes?
Thanks.
 
Carlos Eduardo.



[squid-users] Re: Help me About Squid

2005-02-21 Thread Ric Lonsdale
From: Adam Aube [EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 1:46 PM
Subject: [squid-users] Re: Help me About Squid

Oliver Hookins wrote:
Umar Draz wrote:

I have 512MB ram and 1100MB swap.
Now the question is: I have set a 5GB /cache, so what should cache_mem be?

If you check out the archives you will see that the rule of thumb for disk
cache to memory cache is about 100:1 (if I remember correctly).
So  for a 5GB disk cache you should have 50MB of memory to handle it.

No, that's the rule of thumb for how much memory Squid will use to store
cache metadata, which is based purely on the size of the cache. The
cache_mem setting controls how much memory Squid will use to cache on-disk
objects in memory (which improves performance).

Generally the default for cache_mem does not need to be changed, because the
OS itself will use free memory to cache files. Items in Squid's cache that
are frequently or recently accessed will be included in this file cache.

If a memory shortage occurs, the OS can dump file cache to free up memory,
but memory used by Squid's cache_mem setting can only be recovered by
swapping out Squid, which will drastically hurt performance.

Adam

Following the thread above, I have 4Gb RAM, 11Gb for the cache and 5Gb for
swap.
I've set cache_mem to 2Gb as I too thought that the higher this field was
set to, the better the performance would be, as Squid would be able to store
more in memory, therefore requiring fewer retrievals from disk.
Users are complaining about internet speed and I have been blaming WAN
bandwidth from their location up to the Squid servers and also the ISP link.
I'm now a bit worried that I've misinterpreted previous mailings on sizing
of cache_mem.
Can you clarify, as my swap is currently using 1Gb as well as all the RAM.
If I set it to 8Mb, are you saying that Squid will use the rest of my 4Gb
RAM as it sees fit?
If so why have this setting? I'm missing the point here I'm sure, so please
advise.

Thanks,
Ric



Re: [squid-users] Best performance for squid

2005-02-21 Thread Peter Lustig
Searching for Duane Wessels ppt brings up a comparison of different 
filesystems by Duane Wessels. It is readable with OpenOffice in Linux.

http://www.perl.org/tpc/2002/sessions/wessels_duane.ppt
I chose ReiserFS over ext2 though.





RE: [squid-users] squid and outlook web access

2005-02-21 Thread Guy Speier
While I was having trouble recompiling ssl, I decided to test a minimal
config:

From the internet on port 3128
Redirect to the squid box (still on 3128)
From the squid box to internal on port 80.

It looks like it was close, except I get this in the log:
1109020150.264 36 69.179.44.23 TCP_MISS/404 1864 GET
http://exchangemail.firstlogic.com/exchange:80/ - DIRECT/X.X.X.X
text/html

So, why does it append the :80 onto the URL for my exchange server?

When I test this from my LAN, I can access the URL only when the :80 is
not there.

Please help!
Guy





Re: [squid-users] ACLs in a text file

2005-02-21 Thread James Gray
On Sun, 20 Feb 2005 07:18 am, [EMAIL PROTECTED] wrote:
 If I place my ACL definitions in a text file, and add URLs to the file
 during working hours, is it sufficient to just save the file for the new
 URLs to be allowed, or is it necessary to do something like rotating logs
 or restarting Squid?

killall -HUP squid

...but this will restart all child processes (like redirectors and auth 
modules).  It will be a clean reconfigure though.  The PID will stay the 
same and if you've reniced the parent squid process, all children will 
restart with the same niceness as the parent.  My squid boxes run with 
squid+children niced to -5 (to make sure squid gets priority over webalizer, 
apache and other fru-fru that mangelment want running on them to audit users' 
activities).

or as Joost suggested: sbin/squid -k reconfigure
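
Putting it together for the original question, a sketch (the file path is
just an example). Note that saving the file alone is not enough; Squid
re-reads it only on startup or reconfigure:

acl allowed_sites url_regex "/etc/squid/allowed_urls.txt"
http_access allow allowed_sites

# after adding URLs to the file:
squid -k reconfigure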

Henrik: is there a major difference between sending a HUP signal or using -k 
reconfigure ???

Cheers,

James


Re: [squid-users] Problem with unparseable HTTP header field

2005-02-21 Thread James Gray
On Sat, 19 Feb 2005 12:45 am, Ralf Hildebrandt wrote:
 When I surf to http://www.abstractserver.de/da2005/avi/e/Abs_revi.htm
 and enter any number/character and click Submit my query, I get an
 error page (Invalid Response The HTTP Response message received from
 the contacted server could not be understood or was otherwise
 malformed. Please contact the site operator. Your cache administrator
 may be able to provide you with more details about the exact nature of
 the problem if needed. ) and in my cache.log:

I've been following this thread with interest as I experienced similar 
problems with a website after apt-get upgrade last week to 2.5S7 in Debian 
(Woody + backports.org). If it makes you feel any better, a local site is 
(was) pulling the same stunt: http://www.ht.com.au/

I called their IT people yesterday and gave them the details to which they 
were rather receptive.  Indeed, they had $CLUE.  They called me back in the 
afternoon to advise they had installed a patch at their end and wanted me to 
test it.  Working again - balance is restored and the good guys (us) win :)

Disclaimer: I don't go to that site often, but it was working yesterday.

Cheers,

James


Re: [squid-users] Re: Help me About Squid

2005-02-21 Thread Kevin
On Mon, 21 Feb 2005 21:00:37 -0000, Ric Lonsdale
[EMAIL PROTECTED] wrote:
 Following the thread above, I have 4Gb RAM, 11Gb for the cache and 5Gb for
 swap.

You'll want to keep an eye on swap usage and paging activity (vmstat).
Some operating systems (Solaris) will inherently always have swap in use,
and tools like top will always show a few megs of swap in use; other
systems (OpenBSD) will show 0K swap used until main memory runs low.

Additionally, Solaris is inherently designed to take advantage of all
available memory: RAM that isn't allocated to any process will be occupied
by file cache, and Solaris will go out of its way to swap idle pages out of
main memory to make more space available for applications and low-level
OS caches (file data, inodes, etc).


 I've set cache_mem to 2Gb as I too thought that the higher this field was
 set to, the better the performance would be, as Squid would be able to store
 more in memory, therefore requiring fewer retrievals from disk.

What is tuned by cache_mem is how much RAM (or swap, in degenerate
cases) is *explicitly* allocated to squid for caching objects in memory.
This is distinct from the memory used for the squid runtime and the
metadata overhead memory dynamically allocated to the process --
to my knowledge there is no way to control or limit the RAM squid uses
for metadata.

IMHO, you always want to set cache_mem low enough that the system
doesn't run low on free pages and start to swap unnecessarily.  On a 
dedicated squid cache machine (no other services running), you don't
want the OS to ever be forced to move live pages from main memory
to disk and back again, as this significantly degrades performance, much
more than if squid were forced to go to disk to retrieve cached content.


 Users are complaining about internet speed and I have been blaming WAN
 bandwidth from their location up to the Squid servers and also the ISP link.
 I'm now a bit worried that I've misinterpreted previous mailings on sizing
 of cache_mem.
 Can you clarify, as my swap is currently using 1Gb as well as all the RAM.
 If I set it to 8Mb, are you saying that Squid will use the rest of my 4Gb
 RAM as it sees fit?

Not exactly.
If you set cache_mem to 8 megabytes, squid will only keep 8 megs worth of
objects in memory, plus consume some additional RAM due to metadata,
ipcache/fqdncache, and other overhead.

However the OS will use the rest of my 4Gb RAM as it sees fit.  How much
of the free memory will be allocated towards filesystem metadata, how much
to caching disk files (including frequently accessed squid cache objects) and
how much to other uses is entirely determined by your OS.


 If so why have this setting?
 I'm missing the point here I'm sure, so please advise.

The idea behind cache_mem is that you don't know for sure how your kernel
is going to choose to make use of free system RAM, so you instead choose
to have squid itself explicitly store small frequently accessed objects in
memory (or swap, if things go sour).

I haven't done any benchmarking, but I'd venture that even on an OS such as
Solaris where free memory is actually used as file cache, squid is still
significantly more efficient when returning objects directly from cache_mem
rather than using OS system calls to read (OS cached) files from disk.

Kevin Kadow


Re: [squid-users] Strange reject of users (basic auth)

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Janno de Wit wrote:
What I do is a squid -k reconfigure on the box(es) giving problems...
and then everything is okay.
Sounds like the basic auth helper you are using had a hiccup..
Is there anything in cache.log which may hint at why?
Regards
Henrik


Re: [squid-users] ACLs in a text file

2005-02-21 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, James Gray wrote:
Henrik: is there a major difference between sending a HUP signal or using -k
reconfigure ???
None really.
With -k you don't need to figure out which pid to send the signal to as 
Squid does this for you..
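i.e. the two commands below should be equivalent (the pid file location
depends on your build; the default is shown):

squid -k reconfigure
kill -HUP `cat /usr/local/squid/var/logs/squid.pid`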

Regards
Henrik


Re: [squid-users] Re: Help me About Squid

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, Ric Lonsdale wrote:
If so why have this setting? I'm missing the point here I'm sure, so please
advise.
The cache_mem setting is mainly of benefit when Squid is used as a reverse
proxy. Here the whole set of files served by the proxy may fit in
cache_mem in a very efficient manner and there is practically no need for
disk I/O at all.

In normal Internet proxies this never happens, and cache_mem is better 
left close to the default setting to allow the OS to make efficient use of 
the RAM to speed up disk I/O buffers etc.
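
For a normal Internet proxy that usually means staying close to the
default, e.g.:

cache_mem 8 MB

leaving the rest of the RAM for the OS to use as disk I/O cache.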

Regards
Henrik


Re: [squid-users] Files > 2Gb on FTP sites via Squid

2005-02-21 Thread Henrik Nordstrom
On Mon, 21 Feb 2005, davep wrote:
Running Squid-2.5.STABLE4 (plus security patches, Mandrake 10.0 
distribution).

When accessing an FTP site containing large files (typically DVD images) the 
file size is reported as negative. For example:
Indeed. The famous 2 GB barrier of 32-bit systems. (2^31-1 == 2GB - 1, +1 
and it becomes -2GB)

Is there any fix, or plans for one?
The goal is that Squid-3.0 will handle this right on 64-bit platforms or
when --enable-large-files is used on the supported 32-bit platforms, but
there still remains some cleanup to ensure file sizes are never typecast
to int, either explicitly or implicitly by the compiler.

There is a lot more to it than only fixing the FTP directory listings. There
are very many assumptions in Squid that the object size in bytes fits
within one int (the CPU's default machine word, usually 32 bits).
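
A few lines of C illustrate the wraparound (a sketch of the general
failure mode, not Squid's actual code; the explicit cast mirrors what an
int-typed size field does implicitly):

#include <stdio.h>

int main(void) {
    long long size = 2200000000LL;  /* a 2.2 GB DVD image */
    int reported = (int) size;      /* truncated to 32 bits: wraps negative */
    printf("actual: %lld, reported: %d\n", size, reported);
    return 0;
}

On a 32-bit platform the reported size comes out negative, which is
exactly what the browser sees in the FTP listing.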

Regards
Henrik


Re: [squid-users] Files > 2Gb on FTP sites via Squid

2005-02-21 Thread Henrik Nordstrom

On Mon, 21 Feb 2005, Martin Joseph wrote:
When accessing an FTP site containing large files (typically DVD images) 
the file size is reported as negative. For example:
Squid is not an FTP proxy.  This issue is probably not a squid issue.
Squid is an FTP proxy for HTTP clients such as most web browsers.
It is a Squid issue.
Regards
Henrik


RE: [squid-users] squid and outlook web access

2005-02-21 Thread Merton Campbell Crockett
All you want to do is have squid function as a relay for your OWA system.  
If it is now recording :80, squid has been configured to be the terminus 
of the SSL/TLS tunnel.  If you want to do this, you will need to modify 
the HTTP header to inform OWA that you are encrypting the traffic at your 
network border if you are using a version of Exchange newer than 5.5.

Merton Campbell Crockett



On Mon, 21 Feb 2005, Guy Speier wrote:

 While I was having trouble recompiling ssl, I decided to test a minimal
 config:
 
 From the internet on port 3128
 Redirect to the squid box (still on 3128)
 From the squid box to internal on port 80.
 
 It looks like it was close, except I get this in the log:
 1109020150.264 36 69.179.44.23 TCP_MISS/404 1864 GET
 http://exchangemail.firstlogic.com/exchange:80/ - DIRECT/X.X.X.X
 text/html
 
 So, why does it append the :80 onto the URL for my exchange server?
 
 When I test this from my LAN, I can access the URL only when the :80 is
 not there.
 
 Please help!
 Guy
 
 
 



[squid-users] Squid 3 w/ESI

2005-02-21 Thread Jon
Hi everyone,

I was experimenting with the latest build of Squid 3 with ESI.  I ran the
configure script with the following options: --prefix=/usr/local/squid3
--enable-gnuregex --with-pthreads --enable-esi --enable-storeio=ufs,aufs
--with-aufs-threads=10 --enable-useragent-log --enable-referer-log
--enable-ssl --enable-x-accelerator-vary --with-dl.  Configure finishes fine,
but when I run make I receive the following errors.

/usr/bin/ld: cannot find -lexpat
*** Error code 1

Stop in /usr/home/username/squid-3.0-PRE3-20050218/src.
*** Error code 1

I was reading a tutorial and they seem to have it installed OK.  Can anyone
point me in the right direction?

Thank you,

Jon



Re: [squid-users] Problem with unparseable HTTP header field

2005-02-21 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Reuben Farrelly wrote:
Want to let them know that they are still sending too many keep-alive 
headers?
Fortunately for Squid this specific error is harmless to all versions of 
Squid and completely ignored, and also implicitly cleaned up when the 
message is forwarded by Squid due to the nature of the HTTP protocol.

More generally, how on earth does software manage to screw up something 
simple like an HTTP header so much?
Mostly because no one has cared very much about the integrity or quality of
HTTP headers in the past, I guess. This is likely to start changing real
soon, I hope.

Is it usually an application that just appends duplicate headers to what 
was an existing and legitimate header from a web server?
Applications are not supposed to set hop-by-hop headers such as 
keep-alive.. this is purely the business of the web server core.

Or are there known-to-be broken versions of applications out 
there?
Plenty, as have been seen in the last weeks here on squid-users..
If you want to see how bad the situation really is then set
  relaxed_header_parser off
in your squid.conf and watch several major web sites fall down..
Regards
Henrik


Re: [squid-users] Squid 3 w/ESI

2005-02-21 Thread Henrik Nordstrom

On Mon, 21 Feb 2005, Jon wrote:
Hi everyone,
I was experimenting with the latest build of Squid 3 with ESI.  I ran the
configure script with the following options: --prefix=/usr/local/squid3
--enable-gnuregex --with-pthreads --enable-esi --enable-storeio=ufs,aufs
--with-aufs-threads=10 --enable-useragent-log --enable-referer-log
--enable-ssl --enable-x-accelerator-vary --with-dl.  Configure finishes fine,
but when I run make I receive the following errors.
/usr/bin/ld: cannot find -lexpat
You are missing the expat XML parser library, or at least its
development package.
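
The build paths in your output look like FreeBSD; if so, the ports tree
should supply it (the port name is an assumption on my part):

cd /usr/ports/textproc/expat2 && make install clean

On Linux the equivalent would be the expat development package (e.g.
expat-devel).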

Regards
Henrik


[squid-users] Thomas Werner is out of the office.

2005-02-21 Thread Thomas Werner
I will be out of the office starting  21.02.2005 and will not return until
28.02.2005.

During that time, I will have limited or no access to my email inbox. For
emergencies, I ask you kindly to contact Mark Grace at +49 (0)30 21231-1080
or [EMAIL PROTECTED]

Thank you and kind regards,
Thomas Werner

[squid-users] squid with ldirectord

2005-02-21 Thread SUKHWINDER PAL
hello sir,
we are a group of students working on a Linux
clustering project. We have implemented Linux
Virtual Server (LVS) with two squid servers
(realservers) using the LVS-DR method. Please help us, as our
project is almost successful.
Now we are trying ldirectord to detect failure of the
squid servers. We are using the following:
On LVS:
VIP: 10.11.151.24
DIP: 10.11.150.98

On Realserver1 (squid server):
RIP: 10.11.150.82
VIP: 10.11.151.24

On Realserver2 (squid server):
RIP: 10.11.150.96
VIP: 10.11.151.24
As we know, in the case of LVS-DR the realserver serves
the client directly and not through the
LVS. So how will the ldirectord on the LVS detect the
failure of the squid servers?
Then we tried the martian modification patch
(forward_shared) on the director and issued the
following commands:
echo 1 > /proc/sys/net/ipv4/conf/all/forward_shared
echo 1 > /proc/sys/net/ipv4/conf/DIP/forward_shared
Then we made the default gateway of the squid servers the
DIP.
After that we edited the file /etc/ha.d/ldirectord.cf
as follows:
quiescent=no
virtual=10.11.151.24:3128
real=10.11.150.82:3128 gate
real=10.11.150.96:3128 gate
scheduler=wrr
persistence=600
protocol=tcp
As the service squid is not listed in the man page
of ldirectord, we have not defined the service field.
Now we start all the services (ipvsadm,
ldirectord, squid) on the LVS & realservers. But whenever
any of the squid servers goes down, its
entry is not removed from the ipvsadm table. Also the
client (which was connected to the failed server) does not
get connected to the working squid server
automatically.
What could be the problem, sir?




[squid-users] Re: Improving squid-performance

2005-02-21 Thread Ow Mun Heng
On Tue, 2005-02-22 at 08:13, Stefan Neufeind wrote:
 Hi,
 
 I have a squid running on a machine, in front of two webservers, running
 as a load-balancer and cache at the same time. It works really fine.

Reverse Proxy?

 However, I'm facing performance problems (almost 100% user CPU from time
 to time).

Did you ever determine what the bottleneck was in the 1st place? 

 So I thought about how performance can be improved. Okay, increasing
 mem (currently 1GB) might help. But what other options are available?

memory is one option but I doubt it will help unless you know why it's
using 100% CPU. I'm certain you're not running any sort of
antivirus/filtering app right? 

And you're only using the squid box as a load-balancer (reverse proxy)
for the 2 webservers or are you also using it for internal web surfing
cache? Can you determine if the load is caused by internal or external
users?


 
 On the Fedora mailing list I found your message:
 https://www.redhat.com/archives/fedora-list/2004-November/msg04242.html
 Were there any replies to this (which I didn't notice)? Did you find
 any good howtos? What steps did you take?

I'm sorry but w/o internet access I don't know what message that is. Can
you provide me with at least the subject and the approx date? (I have it
archived in my laptop)

 
 Running Fedora FC3 on that machine. Are file descriptors still a problem?

Not really. But the FC3 box on which I did the install was a home
machine with only 3 users. The other install is on an FC2 box. It's working
fine w/ 4096 descriptors.

 What does this diskd do? I didn't find much information about it. Only:
 http://www.squid-cache.org/Doc/FAQ/FAQ-22.html

Diskd is just another cache storage scheme like aufs. I can't tell you more
than that. But diskd is supposed to function best on *BSD systems;
on Linux, aufs is the better choice (AFAIK).
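
For reference, the store type is just the first field of cache_dir; a
sketch with placeholder size and path:

cache_dir aufs /var/spool/squid 10000 16 256

(10000 MB of cache, 16 first-level and 256 second-level directories; aufs
needs a build with --enable-storeio=ufs,aufs and pthreads.)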

 
 Do the per-client-stats consume _that_ much power?

Power? Not really, only system overheads due to the writes it makes to
the disk. Besides that, depending on what you log, your logs may become
PHAT quite quickly and you'll soon have another problem. :-D

it actually depends on how much you're willing to sacrifice in terms of
performance. I would believe that at times, it is useful esp when
debugging problems. But other than that

 
 Do you know if ext3 or reiserfs is preferred (refering to performance)
 for the cache-directories, or if there is maybe a way for optimised
 performance using raw partitions (like Oracle does)?

I initially thought that reiserfs was the way to go for small files. But
after reading the O'Reilly book, Squid: The Definitive Guide, I was
surprised that ext3 actually performs better.

Raw partitions? I don't know about that. Maybe someone on the list would
know??


 
 
 Your feedback would be _very_ much appreciated.
 
 
 Kind regards,
  Stefan

--
Ow Mun Heng
Gentoo/Linux on DELL D600 1.4Ghz 
98% Microsoft(tm) Free!! 
Neuromancer 13:52:49 up 4:40, 5 users, 
load average: 0.42, 0.49, 0.22 



Re: [squid-users] :Direct connection without DNS lookup

2005-02-21 Thread Matus UHLAR - fantomas
On 22.02 11:59, [EMAIL PROTECTED] wrote:
 Is there a method where you can tell squid to directly connect to a
 specific web site by providing its IP address without DNS lookups?

of course: look at 'dst' acl type and 'always_direct' directive
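
e.g. (the address is a placeholder):

acl fixed_site dst 192.0.2.10
always_direct allow fixed_site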
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
REALITY.SYS corrupted. Press any key to reboot Universe.


Re: [squid-users] Files > 2Gb on FTP sites via Squid

2005-02-21 Thread davep
Henrik Nordstrom wrote:
On Mon, 21 Feb 2005, davep wrote:

It is a lot more to it than only fixing the FTP directory listings. 
There is very many assumptions in Squid that the object size in bytes 
fits within one int (CPU default machine word, usually 32 bits).
Right. I looked at ftp.c and there are several occurrences of
int size;
in there, plus size_t is 32 bits on x86 Linux.
I'll live with this one for a while. Thanks.
--
Dave