[squid-users] Re: Carp issue - not balancing load properly

2011-10-08 Thread david robertson
Nevermind this...  Don't ask :(

On Sat, Oct 8, 2011 at 8:45 PM, david robertson  wrote:
> Hello, I'm having a bit of an issue with CARP, specifically balancing the 
> load.
>
> I have 3 frontend servers that cache only to memory, and 2 backend
> servers that cache only to disk (one aufs device, and one coss device
> on each).  The two backend servers are running on identical hardware,
> and running an identical version of Squid (2.7.STABLE9).  There's
> nothing funky about the configs of either the backends, nor the
> frontends.
>
> The issue is that one backend server is always receiving exactly twice
> the amount of traffic from the 3 frontend servers at all times.
>
> Frontend cache_peer lines:
> cache_peer 192.168.193.78 parent 4001 0 carp login=PASS name=backend no-digest
> cache_peer 192.168.193.116 parent 4001 0 carp login=PASS name=backend2 
> no-digest
>
> No matter what I try, the second server in the list (.116) gets twice
> the traffic that .78 gets.
>
> Output of the cluster stats, gathered from squidclient:
>
> hostname        hits/sec        cacherate
> =
> squid           47              43%
> squid2          39              34%
> squid4          39              33%
> -
>                125             36%
>
> squid3          42              25%
> squid5          85              27%
>
> (Yes, I know 125 hits/sec is low, but it's a low-traffic hour, and we
> frequently get large bursts of traffic)
>
>
> Any help would be much appreciated.
>


[squid-users] Carp issue - not balancing load properly

2011-10-08 Thread david robertson
Hello, I'm having a bit of an issue with CARP, specifically balancing the load.

I have 3 frontend servers that cache only to memory, and 2 backend
servers that cache only to disk (one aufs device, and one coss device
on each).  The two backend servers are running on identical hardware,
and running an identical version of Squid (2.7.STABLE9).  There's
nothing funky about the configs of either the backends, nor the
frontends.

The issue is that one backend server is always receiving exactly twice
the amount of traffic from the 3 frontend servers at all times.

Frontend cache_peer lines:
cache_peer 192.168.193.78 parent 4001 0 carp login=PASS name=backend no-digest
cache_peer 192.168.193.116 parent 4001 0 carp login=PASS name=backend2 no-digest

No matter what I try, the second server in the list (.116) gets twice
the traffic that .78 gets.

Output of the cluster stats, gathered from squidclient:

hostnamehits/seccacherate
=
squid   47  43%
squid2  39  34%
squid4  39  33%
-
125 36%

squid3  42  25%
squid5  85  27%

(Yes, I know 125 hits/sec is low, but it's a low-traffic hour, and we
frequently get large bursts of traffic)


Any help would be much appreciated.


[squid-users] Re: Does stale-if-error apply to a 400 status?

2010-11-16 Thread david robertson
Sorry, I've forgotten the details again:

Squid Cache: Version 2.7.STABLE9-20101104
configure options:  '--prefix=/squid2' '--enable-async-io'
'--enable-icmp' '--enable-useragent-log' '--enable-snmp'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
-O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions' '--enable-htcp'



On Tue, Nov 16, 2010 at 10:58 AM, david robertson  wrote:
> Hello, I have a bit of an urgent issue - Squid is serving 400 errors,
> and I'd like to avoid that.  Ideally, we want Squid to serve the
> object that it has in cache, instead of the 400.  I have
> stale-if-error=1800 in the headers, but squid is still serving a 400
> whenever it gets it from the origin (webserver).  We don't want 400's
> served at all.
>
> So, as the subject says, does a stale-if-error header apply to 400
> status requests?
>


[squid-users] Does stale-if-error apply to a 400 status?

2010-11-16 Thread david robertson
Hello, I have a bit of an urgent issue - Squid is serving 400 errors,
and I'd like to avoid that.  Ideally, we want Squid to serve the
object that it has in cache, instead of the 400.  I have
stale-if-error=1800 in the headers, but squid is still serving a 400
whenever it gets it from the origin (webserver).  We don't want 400's
served at all.

So, as the subject says, does a stale-if-error header apply to 400
status requests?


Re: [squid-users] cache_object://$host/info confusion

2010-11-10 Thread david robertson
These are single servers with one frontend, and two backends.
Limitations in our monitoring utility (Zenoss) prevent me from polling
3 squid instances on a single host.  I poll the frontends for now, but
I'd also like to have some stats on the backends.

However scripting snmpwalk is ideal - I have no idea why I didn't
think of that before...

You're a genius, man.  A genius.


On Wed, Nov 10, 2010 at 5:07 AM, Amos Jeffries  wrote:
>
> Harping way back...
>
>>>>> On Tue, 9 Nov 2010 20:59:56 -0500, david robertson wrote:
>>>>>>
>>>>>> I'm in the process of writing a script to give me some cache hit
>>>>>> statistics for my cluster.  There's some confusion on the cache_object
>>>>>> info output, though.  For example, this particular host only caches to
>>>>>> memory, however this is the output I get:
>
> Why not use SNMP instead of parsing the text? The OIDs are available in the
> wiki for all squid versions and SNMP tools are readily available. you can
> even script snmpwalk directly if you have to.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.3
>


Re: [squid-users] cache_object://$host/info confusion

2010-11-09 Thread david robertson
I'll file one when I get a chance.

So, back to the original question though - out of those two stats, on
a memory caching only server, which one should be correct?  Request
Memory Hit or Request Hit?

On Tue, Nov 9, 2010 at 10:06 PM, Amos Jeffries  wrote:
>
>> On Tue, Nov 9, 2010 at 9:27 PM, Amos Jeffries wrote:
>>> On Tue, 9 Nov 2010 20:59:56 -0500, david robertson wrote:
>>>> I'm in the process of writing a script to give me some cache hit
>>>> statistics for my cluster.  There's some confusion on the cache_object
>>>> info output, though.  For example, this particular host only caches to
>>>> memory, however this is the output I get:
>>>>
>>>>         Request Hit Ratios:     5min: 40.0%, 60min: 39.2%
>>>>         Request Memory Hit Ratios:      5min: 69.7%, 60min: 69.6%
>>>>
>>>> For a host that's only caching to memory, there's a pretty large
>>>> discrepancy between the two listed above.  What's the difference
>>>> between the two above?
>>>>
>>>> Thanks in advance.
>>>
>>> Squid version? and how did you make it "memory only"?
>>>
>
> On Tue, 9 Nov 2010 21:32:57 -0500, david robertson wrote:
>> Sorry:
>> Squid Cache: Version 2.7.STABLE9-20101104
>>
>> The frontend servers only cache to memory, via
>> cache_dir null /dev/null
>>
>
> Okay. Thats correct, so its possibly a bug of some sort then.
>
> Amos
>


Re: [squid-users] cache_object://$host/info confusion

2010-11-09 Thread david robertson
Sorry:
Squid Cache: Version 2.7.STABLE9-20101104

The frontend servers only cache to memory, via
cache_dir null /dev/null


On Tue, Nov 9, 2010 at 9:27 PM, Amos Jeffries  wrote:
> On Tue, 9 Nov 2010 20:59:56 -0500, david robertson 
> wrote:
>> I'm in the process of writing a script to give me some cache hit
>> statistics for my cluster.  There's some confusion on the cache_object
>> info output, though.  For example, this particular host only caches to
>> memory, however this is the output I get:
>>
>>         Request Hit Ratios:     5min: 40.0%, 60min: 39.2%
>>         Request Memory Hit Ratios:      5min: 69.7%, 60min: 69.6%
>>
>> For a host that's only caching to memory, there's a pretty large
>> discrepancy between the two listed above.  What's the difference
>> between the two above?
>>
>> Thanks in advance.
>
> Squid version? and how did you make it "memory only"?
>
> Amos
>


[squid-users] cache_object://$host/info confusion

2010-11-09 Thread david robertson
I'm in the process of writing a script to give me some cache hit
statistics for my cluster.  There's some confusion on the cache_object
info output, though.  For example, this particular host only caches to
memory, however this is the output I get:

Request Hit Ratios: 5min: 40.0%, 60min: 39.2%
Request Memory Hit Ratios:  5min: 69.7%, 60min: 69.6%

For a host that's only caching to memory, there's a pretty large
discrepancy between the two listed above.  What's the difference
between the two above?

Thanks in advance.


Re: [squid-users] Squid is caching the 404 Error Msg...

2010-11-08 Thread david robertson
This is what you're looking for:

#  TAG: negative_ttltime-units
#   Time-to-Live (TTL) for failed requests.  Certain types of
#   failures (such as "connection refused" and "404 Not Found") are
#   negatively-cached for a configurable amount of time.  The
#   default is 5 minutes.  Note that this is different from
#   negative caching of DNS lookups.
#
#Default:
# negative_ttl 5 minutes

Just set it to 0 and it won't cache 404's


2010/11/8 karj :
> Dear Expert,
>
> I'm using:
> - Squid Cache: Version Squid Cache: Version 2.7.STABLE9
>
> My Problem  is.
>
> When i'm using
> Cache-Control headers in the origin iis ( post-check=3600, pre-check=43200 )
>
> Squid is caching the 404 Error Msg.
>
> In the first two or thre requests i have
> TCP_MISS:FIRST_UP_PARENT  ---> squid goes back to origin server
>
> After while i'm getting
> 404 926 TCP_NEGATIVE_HIT:NONE ---> squid servers 404 from it's cache
>
>
> I don't want to cache  Error Msgs.
> Error Msgs should never be cached.
> How can I do that.?
>
>
> thanks in advance
>


Re: [squid-users] "This cache is currently building its digest."

2010-11-08 Thread david robertson
> What is your digest rebuild time set to?
>  your cache_dir and cache_mem sizes?
>  and your negative_ttl setting?

digest_rebuild_period 60 minutes
negative_ttl 1 minute
backends use a cache_dir of 20gb (8mb cache_mem)
frontends use a cache_mem of 2gb (no cache_dir)


> What do you get back when making a manual digest fetch from one of the
> Squid?
>  squidclient -h $squid-visible_hostname
> mgr:squid-internal-periodic/store_digest

I get 'Invalid URL' when trying to hit mgr:squid-internal-periodic/store_digest

I've since set up HTCP, and it seems to be working fine - however this
brings up one additional (unrelated to original problem) question:
Does 2.7 have support for forwarding HTCP CLR's?  If so, it doesn't
seem like it's working.

Thanks for the help, by the way.


>> Squid Cache: Version 2.7.STABLE9
>> configure options:  '--prefix=/squid2' '--enable-async-io'
>> '--enable-icmp' '--enable-useragent-log' '--enable-snmp'
>> '--enable-cache-digests' '--enable-follow-x-forwarded-for'
>> '--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
>> '--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
>> '--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
>> -O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
>> -fno-exceptions'
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.2
>


[squid-users] Re: "This cache is currently building its digest."

2010-11-06 Thread david robertson
Anyone have any ideas?

On Wednesday, November 3, 2010, david robertson  wrote:
> Hello, I'm having a cache-digest related issue that I'm hoping someone
> here can help me with.
>
> I've got a few frontend servers, which talk to a handful of backend
> servers.  Everything is working swimmingly, with the exception of
> cache digests.
>
> The digests used to work without issue, but suddenly all of my backend
> servers have stopped building their digests.  They all say "This cache
> is currently building its digest." when you try to access the digest.
> It's as if the digest rebuild never finishes.  Nothing has changed
> with my configuration, and all of the backends (6 of them) have
> started doing this at roughly the same time.
>
> My first thought would be cache corruption, but I've reset all of the
> caches, and the issue still persists.
>
> Any ideas?
>
>
> Squid Cache: Version 2.7.STABLE9
> configure options:  '--prefix=/squid2' '--enable-async-io'
> '--enable-icmp' '--enable-useragent-log' '--enable-snmp'
> '--enable-cache-digests' '--enable-follow-x-forwarded-for'
> '--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
> '--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
> '--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
> -O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
> -fno-exceptions'
>


[squid-users] "This cache is currently building its digest."

2010-11-03 Thread david robertson
Hello, I'm having a cache-digest related issue that I'm hoping someone
here can help me with.

I've got a few frontend servers, which talk to a handful of backend
servers.  Everything is working swimmingly, with the exception of
cache digests.

The digests used to work without issue, but suddenly all of my backend
servers have stopped building their digests.  They all say "This cache
is currently building its digest." when you try to access the digest.
It's as if the digest rebuild never finishes.  Nothing has changed
with my configuration, and all of the backends (6 of them) have
started doing this at roughly the same time.

My first thought would be cache corruption, but I've reset all of the
caches, and the issue still persists.

Any ideas?


Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/squid2' '--enable-async-io'
'--enable-icmp' '--enable-useragent-log' '--enable-snmp'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
-O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions'


[squid-users] Re: TCP: too many of orphaned sockets

2010-10-07 Thread david robertson
Sorry, forgot the details:

Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/squid2' '--enable-async-io'
'--enable-icmp' '--enable-useragent-log' '--enable-snmp'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=null,aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--with-pthreads' 'CFLAGS=-DNUMS=60 -march=nocona
-O3 -pipe -fomit-frame-pointer -funroll-loops -ffast-math
-fno-exceptions'

Linux server.domain.com 2.6.18-8.1.10.el5 #1 SMP Thu Aug 30 20:43:28
EDT 2007 x86_64 x86_64 x86_64 GNU/Linux



On Thu, Oct 7, 2010 at 10:52 AM, david robertson  wrote:
> Hello, I know this isn't specifically a squid thing, but I think it
> might be semi-related.
>
> I've currently got a Dell 6850 (16gb ram, 16 logical processors)
> server set up, based on the 'one frontend, two backends' example on
> squid-cache.org.  Everything will be fine, but once the cache starts
> getting 170+ inbound hits/sec, I get this in dmesg, and the load
> shoots up on the server, causing squid to grind to a halt.
>
> Out of socket memory
> printk: 29 messages suppressed.
> Out of socket memory
> printk: 48 messages suppressed.
> TCP: too many of orphaned sockets
> printk: 53 messages suppressed.
> Out of socket memory
> printk: 53 messages suppressed.
> TCP: too many of orphaned sockets
> printk: 101 messages suppressed.
> TCP: too many of orphaned sockets
> printk: 328 messages suppressed.
>
>
> Any ideas on what I can do to alleviate this?
>


[squid-users] TCP: too many of orphaned sockets

2010-10-07 Thread david robertson
Hello, I know this isn't specifically a squid thing, but I think it
might be semi-related.

I've currently got a Dell 6850 (16gb ram, 16 logical processors)
server set up, based on the 'one frontend, two backends' example on
squid-cache.org.  Everything will be fine, but once the cache starts
getting 170+ inbound hits/sec, I get this in dmesg, and the load
shoots up on the server, causing squid to grind to a halt.

Out of socket memory
printk: 29 messages suppressed.
Out of socket memory
printk: 48 messages suppressed.
TCP: too many of orphaned sockets
printk: 53 messages suppressed.
Out of socket memory
printk: 53 messages suppressed.
TCP: too many of orphaned sockets
printk: 101 messages suppressed.
TCP: too many of orphaned sockets
printk: 328 messages suppressed.


Any ideas on what I can do to alleviate this?


Re: [squid-users] Ignore part of a URL for caching

2010-08-13 Thread david robertson
Thanks Leonardo, I have everything working as required :)


On Fri, Aug 13, 2010 at 11:32 AM, Leonardo Rodrigues
 wrote:
>
>    i believe you can do it and the topics/wiki articles about youtube
> caching should give you interesting points about that.
>
> Em 13/08/2010 12:17, david robertson escreveu:
>>
>> Hello, I have a question concerning the caching of specific URLs:
>>
>> I'm currently using squid in an accelerator config, and everything is
>> working perfectly fine.  However I've just been given a request to
>> ignore part of a URL when it comes to caching.  For example:
>>
>> http://domain.com/v/subdir/subdir/file.js?variable1=variable1&variable2=variable2&variable3=variable3
>>
>> What they're asking is for this URL to be cached, but ignore variable2
>> in the cache string.  In other words, cache it as
>>
>> http://domain.com/v/subdir/subdir/file.js?variable1=variable1&variable3=variable3
>>
>>  From what they told me, variable2 is dynamic and is different on every
>> hit.  I have no idea why they do this, since it's exactly the same
>> page served every time.  This obviously fills the cache with the exact
>> same content for thousands of different URLs.
>>
>> We can do this with Akamai, have them not include part of the URL as
>> the cache key, but can this be done with squid?
>>
>
>
> --
>
>
>        Atenciosamente / Sincerily,
>        Leonardo Rodrigues
>        Solutti Tecnologia
>        http://www.solutti.com.br
>
>        Minha armadilha de SPAM, NÃO mandem email
>        gertru...@solutti.com.br
>        My SPAMTRAP, do not email it
>
>
>
>
>


[squid-users] Ignore part of a URL for caching

2010-08-13 Thread david robertson
Hello, I have a question concerning the caching of specific URLs:

I'm currently using squid in an accelerator config, and everything is
working perfectly fine.  However I've just been given a request to
ignore part of a URL when it comes to caching.  For example:
http://domain.com/v/subdir/subdir/file.js?variable1=variable1&variable2=variable2&variable3=variable3

What they're asking is for this URL to be cached, but ignore variable2
in the cache string.  In other words, cache it as
http://domain.com/v/subdir/subdir/file.js?variable1=variable1&variable3=variable3

>From what they told me, variable2 is dynamic and is different on every
hit.  I have no idea why they do this, since it's exactly the same
page served every time.  This obviously fills the cache with the exact
same content for thousands of different URLs.

We can do this with Akamai, have them not include part of the URL as
the cache key, but can this be done with squid?


Re: [squid-users] How does Squid prevent stampeding during a cache miss?

2010-08-03 Thread david robertson
Thank you Henrik.  I have one last question concerning
stale-while-revalidate, as the docs don't seem to answer it.

Say you set stale-while-revalidate to something like 30 minutes.  Once
validation occurs, does squid continue to serve the stale content for
30 minutes (even though the object has infact been updated), or will
all new requests immediately be served the new, updated object?


2010/8/2 Henrik Nordström :
> sön 2010-08-01 klockan 11:52 -0400 skrev david robertson:
>> On Sun, Aug 1, 2010 at 1:12 AM, Amos Jeffries  wrote:
>> > If stampeeding is a worry the stale-if-error and stale-while-revalidate
>> > Cache-Control: options would also be useful (sent from the origin web
>> > server). These are supported by 2.7.
>>
>> Question - why aren't these options documented anywhere?  Also, why
>> can't we set this in squid itself, rather than messing with
>> Cache-Control headers?
>
> You can override them from squid.conf as well. But it's recommended to
> use Cache-Control if possible as this places the configuration where it
> really belongs and can best be controlled at the desired detail.
>
> http://www.squid-cache.org/Versions/v2/2.7/cfgman/refresh_pattern.html
>
> Regards
> Henrik
>
>


Re: [squid-users] How does Squid prevent stampeding during a cache miss?

2010-08-01 Thread david robertson
On Sun, Aug 1, 2010 at 1:12 AM, Amos Jeffries  wrote:
> If stampeeding is a worry the stale-if-error and stale-while-revalidate
> Cache-Control: options would also be useful (sent from the origin web
> server). These are supported by 2.7.

Question - why aren't these options documented anywhere?  Also, why
can't we set this in squid itself, rather than messing with
Cache-Control headers?


Re: [squid-users] How does Squid prevent stampeding during a cache miss?

2010-07-31 Thread david robertson
Squid 2.x supports this:

#  TAG: collapsed_forwarding(on|off)
#   This option enables multiple requests for the same URI to be
#   processed as one request. Normally disabled to avoid increased
#   latency on dynamic content, but there can be benefit from enabling
#   this in accelerator setups where the web servers are the bottleneck
#   and reliable and returns mostly cacheable information.

It's exactly what you're looking for.  Basically it causes Squid to
request the URL from the origin server one time, and serves the other
requests the stale content.



On Sat, Jul 31, 2010 at 11:44 AM, Ryan Chan  wrote:
> For example, put Squid as a reverse proxy mode to
> http://www.example.com/heavyduty.php (expire set to 1 hour, need 10s
> to generate).
>
> When the file is just expired, large amount of clients (e.g. 10K)
> request for this file reached the Squid at the same time.
>
> Are there any heuristics performed by Squid to avoid forwarding all
> the 10K requests to the upstream?
>