Re: [squid-users] speeding up browsing? any advice?!

2010-03-08 Thread Luis Daniel Lucio Quiroz
On Sunday, 10 May 2009 at 03:01:14, Roland Roland wrote:
 Hi All,
 
 users on my network have been complaining of slow browsing sessions for a
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily
 upgrading my current bandwidth plan...
 i've thought about Squid; i've set it up on CentOS and it's under testing.
 so my question is: would an out-of-the-box Squid configuration be enough to
 speed things up with its caching options, or is there specific config to
 do so?
 
 PS: i've done the following minimal config so far:
 
 - added an ACL defining my trusted subnet
 - allowed all access to this ACL
 
 i can browse the net and so on through Squid, but am i actually caching? i
 check the caching directory and i see it's growing in size (as minimally as
 one user could cause it to do so), but while using Wireshark, i see that for
 each browsing session i retrieve all static objects from the net!  at the
 same time the caching logs show one hit after another...
 i'm at a loss..!
 
 is that normal?! what did i do wrong? why am i retrieving static objects
 time and time again off the internet instead of from Squid's caching
 directory?

I'm also reviewing how to speed up Mandriva's Squid.  Here is my config.  Any
suggestions?

./configure --build=x86_64-mandriva-linux-gnu --prefix=/usr \
  --exec-prefix=/usr --bindir=/usr/sbin --sbindir=/usr/sbin \
  --sysconfdir=/etc/squid --datadir=/usr/share --includedir=/usr/include \
  --libdir=/usr/lib64 --libexecdir=/usr/lib64/squid --localstatedir=/var \
  --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info \
  --x-includes=/usr/include --x-libraries=/usr/lib64 \
  --enable-shared=yes --enable-static=no --enable-xmalloc-statistics \
  --enable-carp --enable-async-io --enable-storeio=aufs,diskd,null,ufs \
  --enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads \
  --enable-removal-policies=heap,lru --enable-icmp --enable-delay-pools \
  --disable-esi --enable-icap-client --enable-useragent-log \
  --enable-referer-log --enable-wccp --enable-wccpv2 \
  --disable-kill-parent-hack --enable-snmp \
  --enable-cachemgr-hostname=localhost --enable-arp-acl --enable-htcp \
  --enable-ssl --enable-forw-via-db --enable-cache-digests --disable-poll \
  --enable-epoll --enable-linux-netfilter --disable-ident-lookups \
  --enable-default-hostsfile=/etc/hosts \
  --enable-auth=basic,digest,negotiate,ntlm \
  --enable-basic-auth-helpers=getpwnam,LDAP,MSNT,multi-domain-NTLM,NCSA,PAM,SMB,YP,SASL,POP3,DB,squid_radius_auth \
  --enable-ntlm-auth-helpers=fakeauth,no_check,SMB \
  --enable-negotiate-auth-helpers=squid_kerb_auth \
  --enable-digest-auth-helpers=password,ldap,eDirectory \
  --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group \
  --with-default-user=squid --with-pthreads --with-dl --with-openssl=/usr \
  --with-large-files --with-build-environment=default --with-filedescriptors=8192
 


Re: [squid-users] speeding up browsing? any advice?!

2010-03-08 Thread Amos Jeffries
On Mon, 8 Mar 2010 11:38:02 -0600, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
 On Sunday, 10 May 2009 at 03:01:14, Roland Roland wrote:
 Hi All,
 
 users on my network have been complaining of slow browsing sessions for a
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily
 upgrading my current bandwidth plan...
 i've thought about Squid; i've set it up on CentOS and it's under testing.
 so my question is: would an out-of-the-box Squid configuration be enough to
 speed things up with its caching options, or is there specific config to
 do so?
 
 PS: i've done the following minimal config so far:
 
 - added an ACL defining my trusted subnet
 - allowed all access to this ACL
 
 i can browse the net and so on through Squid, but am i actually caching? i
 check the caching directory and i see it's growing in size (as minimally as
 one user could cause it to do so), but while using Wireshark, i see that for
 each browsing session i retrieve all static objects from the net!  at the
 same time the caching logs show one hit after another...
 i'm at a loss..!
 
 is that normal?! what did i do wrong? why am i retrieving static objects
 time and time again off the internet instead of from Squid's caching
 directory?

Check carefully what type of HIT they are.  TCP_HIT is fetched only from
cache; the others involve network fetches of some kind.  REFRESH and
UNMODIFIED both mean Squid sent out an IMS (If-Modified-Since) request to
check for newer content.  REFRESH is when there _is_ newer content and the
server provides the entire object back.  UNMODIFIED is when the content is
unchanged and the cached copy is sent back to the client, but a network
check transferring a very small amount of data is still required to identify
that this is possible.
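
For example, a quick way to tally the result codes in access.log (a shell
one-liner sketch; adjust the log path to your install) is:

  # count occurrences of each result code (field 4 of the native log format)
  awk '{print $4}' /var/log/squid/access.log | sort | uniq -c | sort -rn

A healthy cache shows plenty of TCP_HIT and TCP_MEM_HIT entries alongside
the TCP_MISS and TCP_REFRESH_* ones.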


Also, check that your config does not include the QUERY acl and "cache deny
QUERY" lines.  Those prevent caching of any potentially dynamic content (even
though well-designed sites permit caching of dynamic content).  Badly
designed sites still require a refresh_pattern:

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

[placed just above the "refresh_pattern ." line]
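
For reference, the old default lines to look for and remove are (as shipped
in the stock squid.conf of older releases):

  acl QUERY urlpath_regex cgi-bin \?
  cache deny QUERY

so that the refresh rules end up reading:

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320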


 
 I'm also reviewing how to speed up Mandriva's Squid.  Here is my config.
 Any suggestions?
 
 ./configure --build=x86_64-mandriva-linux-gnu --prefix=/usr \
   --exec-prefix=/usr --bindir=/usr/sbin --sbindir=/usr/sbin \
   --sysconfdir=/etc/squid --datadir=/usr/share --includedir=/usr/include \
   --libdir=/usr/lib64 --libexecdir=/usr/lib64/squid --localstatedir=/var \
   --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info \
   --x-includes=/usr/include --x-libraries=/usr/lib64 \
   --enable-shared=yes --enable-static=no --enable-xmalloc-statistics \
   --enable-carp --enable-async-io --enable-storeio=aufs,diskd,null,ufs \
   --enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads \
   --enable-removal-policies=heap,lru --enable-icmp --enable-delay-pools \
   --disable-esi --enable-icap-client --enable-useragent-log \
   --enable-referer-log --enable-wccp --enable-wccpv2 \
   --disable-kill-parent-hack --enable-snmp \
   --enable-cachemgr-hostname=localhost --enable-arp-acl --enable-htcp \
   --enable-ssl --enable-forw-via-db --enable-cache-digests --disable-poll \
   --enable-epoll --enable-linux-netfilter --disable-ident-lookups \
   --enable-default-hostsfile=/etc/hosts \
   --enable-auth=basic,digest,negotiate,ntlm \
   --enable-basic-auth-helpers=getpwnam,LDAP,MSNT,multi-domain-NTLM,NCSA,PAM,SMB,YP,SASL,POP3,DB,squid_radius_auth \
   --enable-ntlm-auth-helpers=fakeauth,no_check,SMB \
   --enable-negotiate-auth-helpers=squid_kerb_auth \
   --enable-digest-auth-helpers=password,ldap,eDirectory \
   --enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group \
   --with-default-user=squid --with-pthreads --with-dl --with-openssl=/usr \
   --with-large-files --with-build-environment=default --with-filedescriptors=8192

The referrer and useragent logs are mostly a waste.  In the rare cases where
they really are needed they can be replicated with logformat settings.
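
For example, both can be reproduced with something like this (a sketch using
the standard logformat codes; the log paths are placeholders):

  # replicate the old referer log
  logformat referrer %ts.%03tu %>a %{Referer}>h %ru
  access_log /var/log/squid/referer.log referrer

  # replicate the old useragent log
  logformat useragent %>a [%tl] "%{User-Agent}>h"
  access_log /var/log/squid/useragent.log useragent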

Speed in Squid is mostly driven by squid.conf and how much optimization we
have built into the code.  The only configure settings that really affect
speed, AFAIK, are the disk I/O methods (which you already have) and the
select-loop methods: if epoll or kqueue is available and working on your OS,
make sure it is enabled as an option.  If one is available but not working,
that is maybe a bug we need to look into.

The rest is up to the user's configuration.  For example, an admin with a
fixation on regex ACLs will immediately cut a large percentage off their
speed.

Amos


Re: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread Tim Bates

Roland Roland wrote:
but while using Wireshark, i see that for each browsing session i
retrieve all static objects from the net!  at the same time the caching
logs show one hit after another...

is that normal?!


I assume you are using Wireshark to watch traffic between your Squid box
and the internet... In which case, if you look at the replies for files
you think are cached, they should have a status response of 304 (Not
Modified). This is perfectly normal... It's how Squid knows whether it's got
an up-to-date copy or not.
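
As an illustration, that revalidation exchange on the wire looks roughly
like this (hypothetical object and dates):

  GET /images/logo.png HTTP/1.1
  Host: www.example.com
  If-Modified-Since: Sat, 09 May 2009 08:00:00 GMT

  HTTP/1.1 304 Not Modified
  Date: Sun, 10 May 2009 09:00:00 GMT

Only the headers cross the link; the object body itself is served from
Squid's cache.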


TB


RE: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread RoLaNd RoLaNd

oh ok, makes sense..

thanks for the clarification, i appreciate it :)



though, one more question if possible: is there anything else i could do to
speed up browsing aside from what i mentioned earlier?

keep in mind that i only added an allow ACL for my subnet... and that's it! is
it enough?



should i change the size of cached objects or something like that?

i admit these questions may sound silly, but i'm a newbie and trying to
get my head around this...



thanks again..



 Date: Sun, 10 May 2009 19:10:20 +1000
 From: t...@new-life.org.au
 To: r_o_l_a_...@hotmail.com; squid-users@squid-cache.org
 Subject: Re: [squid-users] speeding up browsing? any advice?!
 
 Roland Roland wrote:
 but while using Wireshark, i see that for each browsing session i
 retrieve all static objects from the net!  at the same time the caching
 logs show one hit after another...
 is that normal?!
 
 I assume you are using Wireshark to watch traffic between your Squid box
 and the internet... In which case, if you look at the replies for files
 you think are cached, they should have a status response of 304 (Not
 Modified). This is perfectly normal... It's how Squid knows whether it's
 got an up-to-date copy or not.
 
 TB


Re: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread Gavin McCullagh
Hi,

On Sun, 10 May 2009, Roland Roland wrote:

 users on my network have been complaining of slow browsing sessions for a 
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily  
 upgrading my current bandwidth plan...

Squid may help with this.  However, you don't seem to say that you have
determined the cause of the slowness yet.  One potential reason is that your
users are saturating the available bandwidth.  Another, however, is that you
have loss on a link somewhere.  Another might be your ISP over-contending
you or not giving you the bandwidth you expect.  Another might be slow DNS.

Squid might indeed help in any or all of these situations.  However, I'd be
inclined to monitor the edge router device with MRTG or similar and track
exactly how much bandwidth is being used.  Also, I'd run smokeping across
the link to some upstream sites and see whether you have any packet loss.
If you know the cause, you'll be better able to address the problem.

 though one more question if possible, is there anything i could
 possibly do to speed up browsing aside from what i mentioned earlier?
 
 keep in mind that i only added an allow ACL for my subnet... and that's
 it! is it enough?

For a start, you may want to look at increasing the cache_dir size.  The
default is 1GB, which is pretty small.  The larger your cache, the larger
(albeit decreasingly) your hit rate will be.  Once you have a large cache,
you probably want to increase maximum_object_size.  If you want to save
bandwidth, heap LFUDA may be the best cache removal policy, as opposed to
LRU.  There might also be some sense in looking at delay pools to better
prioritise the bandwidth given to individual users.
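
For example, a starting point along those lines might be (a sketch; the
sizes and path are assumptions to adapt to your hardware):

  # 50 GB disk cache, 16 first-level and 256 second-level directories
  cache_dir aufs /var/spool/squid 51200 16 256

  # allow larger objects into the cache
  maximum_object_size 16 MB

  # favour keeping frequently-used objects, to save bandwidth
  cache_replacement_policy heap LFUDA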

Optimising squid's caching can be a big complicated job.

Gavin



Re: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread Amos Jeffries
 Hi,

 On Sun, 10 May 2009, Roland Roland wrote:

 users on my network have been complaining of slow browsing sessions for a
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily
 upgrading my current bandwidth plan...

 Squid may help with this.  However, you don't seem to say that you have
 determined the cause of the slowness yet.  One potential reason is that your
 users are saturating the available bandwidth.  Another, however, is that you
 have loss on a link somewhere.  Another might be your ISP over-contending
 you or not giving you the bandwidth you expect.  Another might be slow DNS.

 Squid might indeed help in any or all of these situations.  However, I'd be
 inclined to monitor the edge router device with MRTG or similar and track
 exactly how much bandwidth is being used.  Also, I'd run smokeping across
 the link to some upstream sites and see whether you have any packet loss.
 If you know the cause, you'll be better able to address the problem.

 though one more question if possible, is there anything i could
 possibly do to speed up browsing aside from what i mentioned earlier?

 keep in mind that i only added an allow ACL for my subnet... and that's
 it! is it enough?

 For a start, you may want to look at increasing the cache_dir size.  The
 default is 1GB, which is pretty small.

1GB? Only on the newest Squid.  The slightly older ones more commonly used
have a measly 100MB.

Also update the dir type.  The default is ufs, since that's the only fully
portable type, but it is not the optimal one.

Linux gets quite a boost from changing to aufs.
FreeBSD and its children get a big boost from changing to diskd.

On Squid-2, COSS is worth a try as a second dir for smaller objects.
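
A sketch of what that change looks like in squid.conf (the path and sizes
are assumptions):

  # on Linux:
  cache_dir aufs /var/spool/squid 51200 16 256

  # on FreeBSD:
  # cache_dir diskd /var/spool/squid 51200 16 256

  # Squid-2 only, and the exact syntax is version-specific: a COSS stripe
  # for small objects, e.g.
  # cache_dir coss /var/spool/squid/coss 1024 max-size=65536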

 The larger your cache, the larger
 (albeit decreasingly) your hit rate will be.  Once you have a large cache,
 you probably want to increase maximum_object_size.  If you want to save
 bandwidth, heap LFUDA may be the best cache removal policy, as opposed to
 LRU.  There might also be some sense in looking at delay pools to better
 prioritise the bandwidth given to individual users.

 Optimising squid's caching can be a big complicated job.


... but taken step-by-step as an ongoing maintenance process it's worth it ;)

Amos




RE: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread RoLaNd RoLaNd

hello and thanks for the prompt reply,

sorry for not mentioning it earlier; i continuously check my ISP's PRTG and it
shows that we have maxed out our allowed bandwidth.
which is a waste, since we only browse specific sites that load specific static
images that change no more often than weekly..
given that, Squid would pretty much help me speed up browsing sessions (static
images being loaded locally) as well as keep bandwidth free for other traffic
that desperately needs it..




 Date: Mon, 11 May 2009 12:13:38 +1200
 From: squ...@treenet.co.nz
 To: gavin.mccull...@gcd.ie
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] speeding up browsing? any advice?!

 Hi,

 On Sun, 10 May 2009, Roland Roland wrote:

 users on my network have been complaining of slow browsing sessions for a
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily
 upgrading my current bandwidth plan...

 Squid may help with this.  However, you don't seem to say that you have
 determined the cause of the slowness yet.  One potential reason is that your
 users are saturating the available bandwidth.  Another, however, is that you
 have loss on a link somewhere.  Another might be your ISP over-contending
 you or not giving you the bandwidth you expect.  Another might be slow DNS.

 Squid might indeed help in any or all of these situations.  However, I'd be
 inclined to monitor the edge router device with MRTG or similar and track
 exactly how much bandwidth is being used.  Also, I'd run smokeping across
 the link to some upstream sites and see whether you have any packet loss.
 If you know the cause, you'll be better able to address the problem.

 though one more question if possible, is there anything i could
 possibly do to speed up browsing aside from what i mentioned earlier?

 keep in mind that i only added an allow ACL for my subnet... and that's
 it! is it enough?

 For a start, you may want to look at increasing the cache_dir size.  The
 default is 1GB, which is pretty small.

 1GB? Only on the newest Squid.  The slightly older ones more commonly used
 have a measly 100MB.

 Also update the dir type.  The default is ufs, since that's the only fully
 portable type, but it is not the optimal one.

 Linux gets quite a boost from changing to aufs.
 FreeBSD and its children get a big boost from changing to diskd.

 On Squid-2, COSS is worth a try as a second dir for smaller objects.

 The larger your cache, the larger
 (albeit decreasingly) your hit rate will be.  Once you have a large cache,
 you probably want to increase maximum_object_size.  If you want to save
 bandwidth, heap LFUDA may be the best cache removal policy, as opposed to
 LRU.  There might also be some sense in looking at delay pools to better
 prioritise the bandwidth given to individual users.

 Optimising squid's caching can be a big complicated job.


 ... but taken step-by-step as an ongoing maintenance process it's worth it ;)

 Amos




RE: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread RoLaNd RoLaNd

thanks for the advice, i just increased the cache size to 300 GB (i have a
1 TB RAIDed hdd so i don't mind the size)
as for object size i've set it to 15 MB. though one question: i've read that
there's a certain option that keeps cached objects in memory for quick
retrieval..
i've got 6 GB of ram, so i don't mind doing so.. any advice? would it do good
or .. ?

PS: i've started the delay pools yesterday; i'll be testing them today to see
if they work well..

once again thanks for the advice



 Date: Sun, 10 May 2009 23:20:31 +0100
 From: gavin.mccull...@gcd.ie
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] speeding up browsing? any advice?!

 Hi,

 On Sun, 10 May 2009, Roland Roland wrote:

 users on my network have been complaining of slow browsing sessions for a
 while now..
 i'm trying to figure out ways to speed sessions up without necessarily
 upgrading my current bandwidth plan...

 Squid may help with this.  However, you don't seem to say that you have
 determined the cause of the slowness yet.  One potential reason is that your
 users are saturating the available bandwidth.  Another, however, is that you
 have loss on a link somewhere.  Another might be your ISP over-contending
 you or not giving you the bandwidth you expect.  Another might be slow DNS.

 Squid might indeed help in any or all of these situations.  However, I'd be
 inclined to monitor the edge router device with MRTG or similar and track
 exactly how much bandwidth is being used.  Also, I'd run smokeping across
 the link to some upstream sites and see whether you have any packet loss.
 If you know the cause, you'll be better able to address the problem.

 though one more question if possible, is there anything i could
 possibly do to speed up browsing aside from what i mentioned earlier?

 keep in mind that i only added an allow ACL for my subnet... and that's
 it! is it enough?

 For a start, you may want to look at increasing the cache_dir size.  The
 default is 1GB, which is pretty small.  The larger your cache, the larger
 (albeit decreasingly) your hit rate will be.  Once you have a large cache,
 you probably want to increase maximum_object_size.  If you want to save
 bandwidth, heap LFUDA may be the best cache removal policy, as opposed to
 LRU.  There might also be some sense in looking at delay pools to better
 prioritise the bandwidth given to individual users.

 Optimising squid's caching can be a big complicated job.

 Gavin



RE: [squid-users] speeding up browsing? any advice?!

2009-05-10 Thread Adam Carter
 thanks for the advice, i just increased the cache size to 300 GB
 (i have a 1 TB RAIDed hdd so i don't mind the size)
 as for object size i've set it to 15 MB. though one question:
 i've read that there's a certain option that keeps cached
 objects in memory for quick retrieval..

Usually the operating system does this for you, by caching some of the
physical disk in RAM.  For a forward proxy like yours, setting a large
cache_mem isn't recommended, IIRC.
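
If you do want Squid itself to hold some hot objects in memory, a modest
setting is the usual advice (a sketch; the sizes are assumptions):

  # RAM set aside by Squid for hot objects (kept modest on purpose;
  # the OS page cache handles the rest)
  cache_mem 256 MB

  # only keep small objects in that memory cache
  maximum_object_size_in_memory 512 KB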

 i've got 6 GB of ram, so i don't mind doing so.. any advice?
 would it do good or .. ?

The more RAM the better.  The OS should use it as disk cache, as I mentioned
above.  Are you using a 64-bit OS (better) or a 32-bit OS with PAE?  If your
OS reports a lot less than 6 GB, you'll want to fix that.

You might find running a caching-only DNS server helps, as it should cut the
lookup latency across the saturated link (though it probably won't save much
in the way of throughput).
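
Once a local caching resolver (dnsmasq, BIND, etc.) is listening on the
Squid box, pointing Squid at it is one directive (a sketch, assuming the
resolver is on 127.0.0.1):

  # use the local caching resolver for all of Squid's own lookups
  dns_nameservers 127.0.0.1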