[squid-users] ignoring no-cache pragmas for certain URLs?

2007-10-19 Thread Travis Derouin
Hi,

We run Squid as a reverse proxy, and I'm finding that the cache hit ratio
for our RSS feed is quite low, only about 90%. I notice several
TCP_CLIENT_REFRESH_MISS entries in the logs from certain clients, such as
Google Desktop and Windows-RSS-Platform.

Is there a way to force Squid to ignore clients that send a no-cache
pragma for certain URLs? Our RSS feed only changes a few times a day.
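
Something like the following is what I have in mind (just a sketch on my
part, assuming refresh_pattern's ignore-reload option is the right knob;
the /feed.rss path is only a placeholder for our actual feed URL):

refresh_pattern -i /feed\.rss$ 240 100% 240 ignore-reload

I'm not sure whether that's the recommended approach, though.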

Thanks,
Travis


Re: [squid-users] url_rewrite_program and Squid 2.6STABLE

2007-01-17 Thread Travis Derouin

Hi Henrik,

Great. Thanks, that helps a lot.

It looks like the best way to send all URLs that aren't for
www.wikihow.com to the url_rewrite_program is to do this:

acl mydomain dstdomain www.wikihow.com
url_rewrite_access deny mydomain

This seems to do the trick. It wasn't clear to me from the
documentation that it was possible to put allow or deny after
url_rewrite_access.

Thanks a lot for the help!
Travis


[squid-users] url_rewrite_program and Squid 2.6STABLE

2007-01-16 Thread Travis Derouin

Hi,

I have a few issues that I thought someone might have some advice on.

I read in the documentation that url_rewrite_program was no longer
required, as it was possible to force a domain name, i.e. if Squid
received a request for www1.wikihow.com, you could issue a 301
redirect to www.wikihow.com. I can't seem to configure this
properly, as URLs not on www.wikihow.com don't get redirected. This
is my config:

http_port 80 defaultsite=www.wikihow.com vhost

Is that correct, or should it be something else?

My other issue is with the url_rewrite_program: it seems I have too
few url_rewrite_children, as Squid 2.6.STABLE6 keeps giving fatal
errors:

FATAL: Too many queued url_rewriter requests (72 on 12)

This didn't seem to be a problem on our other server which was running
2.6.STABLE3-20060825, which we just moved off of recently. The new
server is currently only getting half of the total traffic the old
server was receiving.

Two questions: could this be a problem particular to 2.6STABLE6? Are
there any serious drawbacks to having a lot of url_rewrite_children,
or do I only need to increase it to 20-40 to handle all of the
requests?
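
To make it concrete, the change I'm considering is simply raising the
helper count in squid.conf (the number is only a guess on my part):

url_rewrite_children 30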

Ideally I'd like a configuration that doesn't rely on
url_rewrite_program, but I haven't figured out the configuration for
that yet.

Any help would be appreciated.

Travis


Re: [squid-users] url_rewrite_program and Squid 2.6STABLE

2007-01-16 Thread Travis Derouin

Hi,

Thanks for the info.

This is what our rewrite.pl script looks like:

#!/usr/bin/perl
# rewrite.pl - url_rewrite_program helper for Squid
$| = 1;    # unbuffered output, required for Squid helpers

my @servers = qw(10.234.169.206 10.234.169.205 10.234.169.196);

while (<STDIN>) {
    my @X = split;
    my $url = $X[0];
    if ($url !~ /^http:\/\/www\.wikihow\.com/) {
        # any other hostname: 301-redirect to the same path on www.wikihow.com
        $_ = $url;
        s/^http:\/\/(.*?)\/(.*)/http:\/\/www.wikihow.com\/$2/;
        print "301:$_\n";
    } else {
        # requests already on www.wikihow.com: hand off to a random back end
        my $server = $servers[int(rand($#servers + 1))];
        $_ = $url;
        s/^http:\/\/www\.wikihow\.com/http:\/\/$server/;
        print "$_\n";
    }
}

Where the 10.xxx servers are our back-end Apache servers (we're using
Squid for load balancing and caching). It basically just checks that
all requests are for pages on the www.wikihow.com domain; if not, it
301 redirects them to the same requested page on www.wikihow.com. We
do this because we used to host wikihow on the wiki.ehow.com
subdomain and have since moved it over, and it's important we 301
redirect old URLs to their new www.wikihow.com domain for SEO
purposes.

Is a url_rewrite_program still needed to do this? If so, how can I
make it concurrent? If not, how can I configure Squid to issue the 301
redirects for pages requested from the other domains?

It seems that specifying a deny_info URL sends browsers a 302; it's
essential we send them a 301. In addition, it's essential that
requests for www1.wikihow.com/page2 get 301 redirected to their
counterpart, www.wikihow.com/page2.

I'm not sure why this version of Squid is running out of rewrite
children; the only differences between this installation and the other
one are that we are using epoll and it's on a 64-bit processor. I'm not
sure if this affects anything. How much memory do the helper
instances use?

Thanks!
Travis

On 1/16/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:

Tue 2007-01-16 at 20:20 -0500, Travis Derouin wrote:
 Hi,

 I have a few issues that I thought someone might have some advice on.

 I read in the documentation that url_rewrite_program was no longer
 required, as it was possible to force a domain name, i.e. if Squid
 received a request for www1.wikihow.com, you could issue a 301
 redirect to www.wikihow.com.

Yes. This is done by denying the request and using deny_info to redirect
the browser to the correct URL.
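
For example, roughly (an untested sketch; note that the deny_info URL is
sent as a 302, and the originally requested path is not carried over):

acl mysite dstdomain www.wikihow.com
http_access deny !mysite
deny_info http://www.wikihow.com/ mysite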

 http_port 80 defaultsite=www.wikihow.com vhost

 Is that correct, or should it be something else?

The above says you are running an accelerator with domain-based virtual
host support, and old HTTP/1.0 clients not sending Host headers will
be processed as requests for the www.wikihow.com domain.


 FATAL: Too many queued url_rewriter requests (72 on 12)

 This didn't seem to be a problem on our other server which was running
 2.6.STABLE3-20060825, which we just moved off of recently. The new
 server is currently only getting half of the total traffic the old
 server was receiving.

Odd.. Squid-2.6 is the same as 2.5 there..

 Two questions: could this be a problem particular to 2.6STABLE6? Are
 there any serious drawbacks to having a lot of url_rewrite_children?

Main drawback is memory usage by the helper instances.

What is the URL-rewriter used for? And what does it do? If it's just
local processing of the URL with no external lookups, then a single
helper instance is sufficient, but it requires the helper to be
modified to use the concurrent protocol.
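
Roughly, with url_rewrite_concurrency set in squid.conf, Squid prefixes
each request line with a channel ID which the helper must echo back in
front of its reply. An untested sketch of what the helper loop would look
like (adapt the actual rewriting to your needs):

$| = 1;
while (<STDIN>) {
    chomp;
    my ($id, $url) = split;
    if ($url !~ m{^http://www\.wikihow\.com}) {
        # keep the path, swap the hostname, and ask Squid to send a 301
        (my $new = $url) =~ s{^http://[^/]+}{http://www.wikihow.com};
        print "$id 301:$new\n";
    } else {
        # a blank result after the ID means "leave the URL unchanged"
        print "$id\n";
    }
}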

 Ideally I'd like a configuration that doesn't rely on
 url_rewrite_program, but I haven't figured out the configuration for
 that yet.

If you describe what you use the url rewriter helper for, then maybe
we can help you with that.

Regards
Henrik





[squid-users] url rewriting with squid 2.6

2006-08-23 Thread Travis Derouin

Hi,

We have been doing url rewriting with Squid 2.5 with success so far,
but we're having issues getting it to work on 2.6. We have a few
hostnames (wikihow.net, wikihow.org) that we would like 301 redirected
to wikihow.com, and we have two back-end Apache servers. I've played
around with forceddomain in the cache_peer settings, turning it on
and off, and it doesn't seem to do anything for our situation.

I've also copied over our previously working redirector script and set
it up as url_rewrite_program, but it's not being called (I verified
this by putting some logging statements in redirect.pl and nothing is
being written to the log, although I can see it running when I do a
ps -aux).

here are some settings we've been using:

url_rewrite_program /usr/local/squid2.6/sbin/redirect.pl

http_port 80 defaultsite=www.wikihow.com
#http_port 80
#cache_peer 10.234.169.204 parent 80 0 no-query originserver round-robin forceddomain=www.wikihow.com
#cache_peer 10.234.169.201 parent 80 0 no-query originserver round-robin forceddomain=www.wikihow.com
cache_peer 10.234.169.204 parent 80 0 no-query originserver round-robin
cache_peer 10.234.169.201 parent 80 0 no-query originserver round-robin

acl port80 port 80
acl mysites dstdomain www.wikihow.com
http_access allow mysites port80
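
One thing I wasn't sure about is whether the other hostnames also need to
be allowed through http_access before the rewriter will ever see their
requests. Something like this is what I had in mind (just a guess on my
part, not a tested fix):

acl oldsites dstdomain wikihow.net wikihow.org
http_access allow oldsites port80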

Any suggestions?
Travis


[squid-users] http accelerator and timeouts

2006-07-05 Thread Travis Derouin

Hi,

We're running Squid in reverse proxy mode as an HTTP accelerator. When
our back-end web server that generates the uncached content has problems
and becomes unresponsive, so does our front-end Squid server. Is there
a way we can have Squid ignore this and continue to serve cached
content to users once a certain timeout has passed when contacting
the back-end Apache server? It would be helpful in two ways: one,
visitors accessing cached content won't experience the interruption,
and two, connections to the back-end server won't pile up, adding to
the problem.
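
The kind of thing I was imagining is tightening the timeouts toward the
back end, e.g. something like this (the values are placeholders, and I'm
not sure these are even the right directives):

cache_peer 10.234.169.204 parent 80 0 no-query originserver connect-timeout=5
connect_timeout 10 seconds
read_timeout 1 minute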

Any suggestions?

Thanks,
Travis