[squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Carlos Lima
Hi List,

I've been testing and studying Squid for almost two weeks now and I'm
getting no results. I already understand the problems related to HTTP
headers: in most cases web server administrators or programmers
are creating more and more dynamic data, which is bad for caching. So,
I installed CentOS 5 along with 2.6.STABLE6 using yum install and set
only an ACL for my internal network. After that I also set
visible_hostname to localhost since squid was complaining about it.
Now, as I stated already, I read a lot regarding Squid, including
some tips on optimizing disk (sda) access or increasing the memory size
limit, but shouldn't Squid work great out of the box?! Oh, I
forgot: my problem is that in mysar, which I installed in order to see
the performance, I only see 0% TRAFFIC CACHE PERCENT after visiting
almost 300 websites. On some occasions I see 10% or even
30-40%, but for almost 98% of websites I get 0%.

So my questions are:
- Should Squid only be considered for large environments
with hundreds or even thousands of people accessing the web?!
- These days, is a proxy like Squid for caching purposes more a
nice-to-have or a must-have, when proxies are skipped for almost
every site and WAN access speeds are increasing every day!?

Thanks!

By the way:

I intend to use Squid for caching purposes only since I already have
Cisco-based QoS and bandwidth management. My deployment site has at
most 5 people accessing the web simultaneously over an 8Mb DSL connection.
My current config is:

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 64 MB
maximum_object_size 40 MB
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl myNetwork src 10.10.1.0/255.255.255.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow myNetwork
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_user squid
cache_effective_group squid
delay_pools 1
delay_class 1 1
delay_parameters 1 -1/-1
coredump_dir /var/spool/squid
visible_hostname localhost


Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Dieter Bloms
Hi,


On Mon, Dec 17, Carlos Lima wrote:

 So my questions are:
 - Should Squid be taking only in consideration for large environments
 with hundreds or even thousands of people accessing web?!

no, it can also be used in small environments.

 - In these days a proxy like Squid for caching purposes is more a
 have to have or a must to have when for almost every site proxy's
 are skipped and the wan speed access are increasing every day now!?

you can configure user-, time-, source-, or destination-based ACLs, and
you get an application gateway (which is more than just a packet filter
like a Cisco firewall).

Btw.:
I think you should set cache_dir to some GB, to cache more than the
default 100MB of data to the disk cache.
Please update to the latest stable release; 2.6.STABLE6 is a little
outdated (from December last year). 
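A cache_dir line sized along those lines might look like this (the path
and sizes here are illustrative; 5000 is the cache size in MB, followed
by the number of first- and second-level subdirectories):

  cache_dir ufs /var/spool/squid 5000 16 256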


-- 
Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.




Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Amos Jeffries
 Hi List,

 I've been testing and studying Squid for almost two weeks now and I'm
 getting no results. I already understand the problems related to HTTP
 headers: in most cases web server administrators or programmers
 are creating more and more dynamic data, which is bad for caching. So,
 I installed CentOS 5 along with 2.6.STABLE6 using yum install and set
 only an ACL for my internal network. After that I also set
 visible_hostname to localhost since squid was complaining about it.

Your DNS is slightly broken. Any web-service server should have a FQDN
for its hostname. Many programs like squid use the hostname in their
outward connections, and many validate all connecting hosts before
accepting data traffic.

 Now, as I stated already, I read a lot regarding Squid, including
 some tips on optimizing disk (sda) access or increasing the memory size
 limit, but shouldn't Squid work great out of the box?! Oh, I

It does ... for a generic 1998-era server.
To work these days the configuration is very site-specific.

 forgot: my problem is that in mysar, which I installed in order to see
 the performance, I only see 0% TRAFFIC CACHE PERCENT after visiting
 almost 300 websites. On some occasions I see 10% or even
 30-40%, but for almost 98% of websites I get 0%.

Those would be the ones including '?' in the URI, methinks.


 So my questions are:
 - Should Squid only be considered for large environments
 with hundreds or even thousands of people accessing the web?!
 - These days, is a proxy like Squid for caching purposes more a
 nice-to-have or a must-have, when proxies are skipped for almost
 every site and WAN access speeds are increasing every day!?

 Thanks!

 By the way:

 I intend to use Squid for caching purposes only since I already have
 Cisco-based QoS and bandwidth management. My deployment site has at
 most 5 people accessing the web simultaneously over an 8Mb DSL connection.

Well then as said earlier, you need more than 100MB of data cache, and
probably more than 64MB of RAM cache.

 My current config is:

 http_port 3128
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY

Right here you are non-caching a LOT of websites, some of which are
actually cacheable.

We now recommend using 2.6.STABLE17 with a new refresh_pattern set instead.

  refresh_pattern cgi-bin 0 0% 0
  refresh_pattern \? 0 0% 0
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern . 0 20% 4320


 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 cache_mem 64 MB
 maximum_object_size 40 MB

You will get at most 3 objects of that size in the cache the way things
are. It will also skip most video and download content. To save
bandwidth you should have gigs of disk available, and the maximum object
size should be at least 720MB.
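A sketch of those suggestions (the disk path and exact sizes are
illustrative, not prescriptive):

  # allow large downloads (CD images, video) to be cached
  maximum_object_size 720 MB
  # several GB of disk cache instead of the 100MB default
  cache_dir ufs /var/spool/squid 10000 16 256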

 access_log /var/log/squid/access.log squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl myNetwork src 10.10.1.0/255.255.255.0
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow myNetwork
 http_access deny all
 http_reply_access allow all
 icp_access allow all

A standalone squid does not need ICP. Drop that.
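If ICP is not wanted at all, the ICP listener can also be switched off
entirely (setting icp_port to 0 disables it):

  icp_port 0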

 cache_effective_user squid
 cache_effective_group squid

These are better left to the OS defaults. Slight misconfigurations here
can really compromise your system security.

 delay_pools 1
 delay_class 1 1
 delay_parameters 1 -1/-1

These are useless. The delay_parameters of -1/-1 effectively mean no
limiting at all.
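If bandwidth limiting were actually wanted, a class-1 pool with real
byte rates might look like this (the 64 kB/s figure is only an example):

  delay_pools 1
  delay_class 1 1
  # restore rate / maximum bucket size, in bytes per second / bytes
  delay_parameters 1 65536/65536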

 coredump_dir /var/spool/squid
 visible_hostname localhost

This should be a publicly resolvable FQDN. It is the name squid connects
outbound with. If the machine is a server (likely) its hostname should be
a FQDN to communicate well with the Internet.
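For example (the hostname here is purely illustrative):

  visible_hostname proxy.example.com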


Amos




Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Manoj_Rajkarnikar

On Tue, 18 Dec 2007, Amos Jeffries wrote:


[... full quote of the earlier messages snipped ...]
Right here you are non-caching a LOT of websites, some of which are
actually cacheable.

We now recommend using 2.6.STABLE17 with a new refresh_pattern set instead.

 refresh_pattern cgi-bin 0 0% 0
 refresh_pattern \? 0 0% 0
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440


also add these refresh_pattern lines here and see if they help...

refresh_pattern -i \.exe$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.zip$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tar\.gz$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tgz$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mp3$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ram$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpeg$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.gif$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.wav$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.avi$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpeg$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpg$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.pdf$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ps$    10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.Z$     10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.doc$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ppt$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tiff$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.snd$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpe$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.midi$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private