varnishncsa of backend requests (changeset 4480)

2010-03-09 Thread Rob S
Hi,

Since http://varnish-cache.org/changeset/4480, we've lost the ability to 
view backend requests in varnishncsa.

Is there a recommended workaround?  Can varnishlog be coaxed into 
spitting out similar information, or is there another solution?

Thanks,


Rob
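[Editor's note: a varnishlog-based substitute might look something like the following. This is a sketch against the varnishlog(1) options of that era (-b to show only backend traffic, -o to group records by transaction, -i to include specific tags); verify against your installed version before relying on it.

# Show only backend-side transactions, grouped per request, limited
# to the request line, URL and headers sent to the backend.
varnishlog -b -o -i TxRequest -i TxURL -i TxHeader
]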
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Understand "hit for pass" cache objects

2010-02-15 Thread Rob S
Rob S wrote:
> Justin Pasher wrote:
>   
>> Hello,
>>
>> Herein lies my dilemma. A request for the same URL 
>> (http://www.example.com/) is sometimes cacheable and sometimes not 
>> cacheable (it usually depends on whether it's the first time a user 
>> visits the site and the Set-Cookie header has to be sent). What this 
>> means is if I have a very heavy hit URL as a landing page from Google, 
>> most of the time there will be a "hit for pass" cache object in Varnish, 
>> since most people going to that page will have a Set-Cookie header.
>> 
>
> Justin,
>
> Rather than answer your question (which other people are answering), I'd 
> suggest you reconsider using sessions and selectively caching full 
> pages.  There are several other solutions that might work for you - for 
> example, including personalised content via ESI, or overlaying it 
> client-side with javascript.  We're using a combination of these to 
> great effect - and ensure that any page containing a session cookie is 
> never cached.
>
> Obviously the based 
I meant "appropriate"  - goodness knows what I was typing!

> answer would depend on the nature of your apps, but 
> it might be worth looking at in the longer term.  There's more than one 
> way to crack an egg.
>
>   
> Rob



Re: Understand "hit for pass" cache objects

2010-02-15 Thread Rob S
Justin Pasher wrote:
> Hello,
>
> Herein lies my dilemma. A request for the same URL 
> (http://www.example.com/) is sometimes cacheable and sometimes not 
> cacheable (it usually depends on whether it's the first time a user 
> visits the site and the Set-Cookie header has to be sent). What this 
> means is if I have a very heavy hit URL as a landing page from Google, 
> most of the time there will be a "hit for pass" cache object in Varnish, 
> since most people going to that page will have a Set-Cookie header.

Justin,

Rather than answer your question (which other people are answering), I'd 
suggest you reconsider using sessions and selectively caching full 
pages.  There are several other solutions that might work for you - for 
example, including personalised content via ESI, or overlaying it 
client-side with javascript.  We're using a combination of these to 
great effect - and ensure that any page containing a session cookie is 
never cached.

Obviously the based answer would depend on the nature of your apps, but 
it might be worth looking at in the longer term.  There's more than one 
way to crack an egg.



Rob


Re: Varnish load balancer & (keep session)

2010-02-08 Thread Rob S
Just to copy in the list... the problem Axel was seeing is one that 
troubled us for a bit - getting unexpected 503 responses.

Solution: Make sure the top of "sub vcl_recv" has a default backend:

set req.backend = xxx;

You can override this later with conditional statements, or whatever, 
but having a default helps prevent 503s.
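A minimal sketch of what that looks like in VCL 2.x (the director name b1 and backend front1 are taken from elsewhere in this thread; adjust to your own config):

```vcl
sub vcl_recv {
    # Always establish a default backend first, so no request reaches
    # the end of vcl_recv without one (a common cause of 503s).
    set req.backend = b1;

    # Conditional overrides can follow, for example:
    if (req.http.host ~ "admin\.example\.com") {
        set req.backend = front1;
    }
}
```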


Rob


Axel DEAU wrote:
> Hi,
>
> It seems that with this method it works very well.  Thank you a lot for your 
> help, and I wish you a nice day.
>
> Best regards
>
> Axel DEAU | NOVACTIVE SYTEME
>
> Systems and Network Administrator
> mail : a.d...@novactive-systemes.com
> Tel : + 33 1 48 24 33 60
> Fax : + 33 1 48 24 33 54
> www.novactive.com
>
>
> -Original Message-
> From: Rob S [mailto:rtshils...@gmail.com] 
> Sent: Monday, 8 February 2010 11:50
> To: Axel DEAU
> Cc: Sacha MILADINOVIC
> Subject: Re: Varnish load balancer & (keep session)
>
> At the very top of "sub vcl_recv", please add:
>
> set req.backend = b1;
>
> This will set the default backend.
>
> Can you also send me the output of
>
> # varnishlog |grep Backend_health
> 0 Backend_health - server7 Still healthy 4--X-S-RH 10 8 10 0.007498 
> 0.009539 HTTP/1.1 200 OK
> 0 Backend_health - server2 Still healthy 4--X-S-RH 10 8 10 0.006767 
> 0.013814 HTTP/1.1 200 OK
> 0 Backend_health - server3 Still healthy 4--X-S-RH 10 8 10 0.012027 
> 0.010841 HTTP/1.1 200 OK
>
> from before and after you stop apache on the first and second backends.
>
>
> Rob
>
>
> Axel DEAU wrote:
>   
>> Hi,
>>
>> Absolutely
>>
>> Axel DEAU | NOVACTIVE SYTEME
>>
>> Systems and Network Administrator
>> mail : a.d...@novactive-systemes.com
>> Tel : + 33 1 48 24 33 60
>> Fax : + 33 1 48 24 33 54
>> www.novactive.com
>>
>>
>> -Original Message-
>> From: Rob S [mailto:rtshils...@gmail.com] 
>> Sent: Monday, 8 February 2010 10:24
>> To: Axel DEAU
>> Cc: Sacha MILADINOVIC
>> Subject: Re: Varnish load balancer & (keep session)
>>
>> Axel,
>>
>> Can you post your entire VCL, and I'll see why this is happening.
>>
>> Rob
>>
>> Axel DEAU wrote:
>>   
>> 
>>> Hi Rob,
>>>
>>> Thanks for the reply, for 1) when I shut down the second backend all the 
>>> traffic goes to the first backend but,
>>> When I shut down the first backend even if the second backend mark "Still 
>>> healthy" the error 503 appears.
>>>
>>> On the other point, I agree with you...
>>>
>>> -Original Message-
>>> From: Rob S [mailto:rtshils...@gmail.com] 
>>> Sent: Sunday, 7 February 2010 13:33
>>> To: Axel DEAU
>>> Cc: varnish-misc@projects.linpro.no
>>> Subject: Re: Varnish load balancer & (keep session)
>>>
>>> Hi,
>>>
>>> To answer some of your questions:
>>>
>>> 1) 503 error when shutting down a backend:  When you shutdown the 
>>> backend, do you see varnishlog say that the backend is healthy or sick?  
>>> If one is sick, then the other should get the traffic if your VCL 
>>> contains set req.backend = b1;
>>>
>>> 2) The Varnish load balancer does not keep e-commerce sessions for PHP.  The 
>>> simplest solution to this is to install memcache, and put the following 
>>> lines in your php.ini file:
>>>
>>> [Session]
>>> session.save_handler = memcached
>>> session.save_path = "memcache-server1:11211,memcache-server2:11211"
>>>
>>> instead of session.save_handler = files
>>>
>>> However, I can't say for certain that this will definitely work - it 
>>> depends on how your ecommerce application operates.
>>>
>>> 3) S-flag: I'm not sure about this, but my gut feeling is that it's not 
>>> causing the problems you're seeing.
>>>
>>>
>>>
>>> Rob
>>>
>>>
>>> Axel DEAU wrote:
>>>   
>>> 
>>>   
>>>> Version: 2.0.6-1
>>>>
>>>> Install: .deb
>>>>
>>>> Os: Debian 5.0.3
>>>>
>>>> Hi,
>>>>
>>>> I've got two backends running apache2: front1.domain.com & 
>>>> front2.domain.com, set with the load balancing configuration 
>>>> from http://varnish-cache.org/wiki/LoadBalancing.
>>>>
>>>> _The issue is, when I shutdown apache2 of the first backend

Re: Varnish load balancer & (keep session)

2010-02-07 Thread Rob S
Hi,

To answer some of your questions:

1) 503 error when shutting down a backend:  When you shutdown the 
backend, do you see varnishlog say that the backend is healthy or sick?  
If one is sick, then the other should get the traffic if your VCL 
contains set req.backend = b1;

2) The Varnish load balancer does not keep e-commerce sessions for PHP.  The 
simplest solution to this is to install memcache, and put the following 
lines in your php.ini file:

[Session]
session.save_handler = memcached
session.save_path = "memcache-server1:11211,memcache-server2:11211"

instead of session.save_handler = files

However, I can't say for certain that this will definitely work - it 
depends on how your ecommerce application operates.

3) S-flag: I'm not sure about this, but my gut feeling is that it's not 
causing the problems you're seeing.



Rob


Axel DEAU wrote:
>
> Version: 2.0.6-1
>
> Install: .deb
>
> Os: Debian 5.0.3
>
> Hi,
>
> I've got two backends running apache2: front1.domain.com & 
> front2.domain.com, set with the load balancing configuration 
> from http://varnish-cache.org/wiki/LoadBalancing.
>
> _The issue is, when I shut down apache2 on the first backend, varnish 
> doesn't switch to the second and displays "Error 503 Service 
> Unavailable".  Is that normal behaviour from varnish?_
>
> Other question: _does the varnish load balancer keep PHP sessions?  If so, 
> how do I do that?_
>
> Varnishlog :
>
> 0 Backend_health - front1 Still healthy 4--X-RH 10 8 10 0.040008 
> 0.039814 HTTP/1.1 200 OK
>
> 0 Backend_health - front2 Still healthy 4--X-RH 10 8 10 0.066948 
> 0.066591 HTTP/1.1 200 OK
>
> The S flag is missing in my log, is that an issue…
>
> "4--X-S-RH" to notify that TCP socket shutdown succeeded 
> from http://varnish-cache.org/wiki/BackendPolling
>
> Part of default.vcl
>
> backend front1 {
>
>   .host = "front1.domain.com";
>
>   .port = "80";
>
>   .probe = { .url = "/";
>
>  .interval = 10s;
>
>  .timeout = 5s;
>
>  .window = 10;
>
>  .threshold = 8;
>
>  }
>
> }
>
>  
>
> backend front2 {
>
>   .host = "front2.domain.com";
>
>   .port = "80";
>
>   .probe = { .url = "/";
>
>  .interval = 10s;
>
>  .timeout = 5s;
>
>  .window = 10;
>
>  .threshold = 8;
>
>  }
>
> }
>
>  
>
> director b1 random
>
> {
>
>{ .backend = front1; .weight = 5; }
>
>{ .backend = front2; .weight = 1; }
>
> }
>
>  
>
> #director b1 round-robin {
>
> #{ .backend = front1; }
>
> #{ .backend = front2; }
>
> #}
>
> Thanks for your help...
>
> 
>



Re: 503 Errors on POST

2010-02-07 Thread Rob S
Torrance,

Can you upload a full tcpdump packet trace both between client and 
varnish, and varnish and backend, together with the varnish logs and 
varnish config, and I'll take a look.


Rob

Torrance wrote:
> I've no response to this on list, and the problem is ongoing. Should I
> file this as a bug?
>
> Torrance
>
>
>
> On 30/01/10 12:32 PM, Torrance wrote:
>   
>> Hi Tollef,
>>
>> I've pasted the logs of two failed requests below. As you can see,
>> they're both in response to POST requests, though I was overstating the
>> frequency at which these errors are occurring: they're occurring about
>> 10% of the time.
>>
>> To be honest, I don't entirely understand the logs or their format, but
>> I hope I've captured the important details. (Session IDs have been
>> deleted, btw).
>>
>> Many thanks,
>> Torrance
>>
>>
>>15 ReqStart c 125.236.128.219 51361 561006524
>>15 RxRequestc POST
>>15 RxURLc /node/78063/edit
>>15 RxProtocol   c HTTP/1.1
>>15 RxHeader c Host: indymedia.org.nz
>>15 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS
>> X 10.6; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
>>15 RxHeader c Accept:
>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>>15 RxHeader c Accept-Language: en-gb,en;q=0.5
>>15 RxHeader c Accept-Encoding: gzip,deflate
>>15 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
>>15 RxHeader c Keep-Alive: 115
>>15 RxHeader c Connection: keep-alive
>>15 RxHeader c Referer: http://indymedia.org.nz/node/78063/edit
>>15 RxHeader c Cookie: comment_info_name=Tester;
>> SESSx=x;
>> SESSx=x; has_js=1
>>15 RxHeader c Content-Type: multipart/form-data;
>> boundary=---1850078892860212931738819713
>>15 RxHeader c Content-Length: 16978
>>15 VCL_call c recv
>>15 VCL_return   c pass
>>15 VCL_call c pass
>>15 VCL_return   c pass
>>15 Backend  c 10 default default
>>10 TxRequestb POST
>>10 TxURLb /node/78063/edit
>>10 TxProtocol   b HTTP/1.1
>>10 TxHeader b Host: indymedia.org.nz
>>10 TxHeader b User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS
>> X 10.6; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
>>10 TxHeader b Accept:
>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>>10 TxHeader b Accept-Language: en-gb,en;q=0.5
>>10 TxHeader b Accept-Encoding: gzip,deflate
>>10 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
>>10 TxHeader b Referer: http://indymedia.org.nz/node/78063/edit
>>10 TxHeader b Cookie: comment_info_name=Tester;
>> SESSx=x;
>> SESSx=x; has_js=1
>>10 TxHeader b Content-Type: multipart/form-data;
>> boundary=---1850078892860212931738819713
>>10 TxHeader b Content-Length: 16978
>>10 TxHeader b X-Forwarded-For: 125.236.128.219
>>10 TxHeader b X-Varnish: 561006524
>>10 TxHeader b X-Forwarded-For: 125.236.128.219
>>10 BackendClose b default
>>15 VCL_call c error
>>15 VCL_return   c deliver
>>15 Length   c 465
>>15 VCL_call c deliver
>>15 VCL_return   c deliver
>>15 TxProtocol   c HTTP/1.1
>>15 TxStatus c 503
>>15 TxResponse   c Service Unavailable
>>15 TxHeader c Server: Varnish
>>15 TxHeader c Retry-After: 0
>>15 TxHeader c Content-Type: text/html; charset=utf-8
>>15 TxHeader c Content-Length: 465
>>15 TxHeader c Date: Fri, 29 Jan 2010 23:00:42 GMT
>>15 TxHeader c X-Varnish: 561006524
>>15 TxHeader c Age: 1
>>15 TxHeader c Via: 1.1 varnish
>>15 TxHeader c Connection: close
>>15 ReqEnd   c 561006524 1264806040.957435846 1264806042.241542339
>> 4.125935793 1.284075260 0.31233
>>15 SessionClose c error
>>15 StatSess c 125.236.128.219 51361 14 1 3 0 3 2 1410 49426
>> 0 StatAddr - 125.236.128.219 0 1102 34 74 0 32 42 38287 1054700
>>
>>
>>21 ReqStart c 125.236.128.219 53669 561007510
>>21 RxRequestc POST
>>21 RxURLc /node/78063/edit
>>21 RxProtocol   c HTTP/1.1
>>21 RxHeader c Host: indymedia.org.nz
>>21 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS
>> X 10.6; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6
>>21 RxHeader c Accept:
>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>>21 RxHeader c Accept-Language: en-gb,en;q=0.5
>>21 RxHeader c Accept-Encoding: gzip,deflate
>>21 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
>>21 RxHeader c Keep-Alive: 115
>>21 RxHeader c Connection: keep-

Re: Varnish load balancer & (keep session)

2010-02-06 Thread Rob S
Hi,

To answer some of your questions:

1) 503 error when shutting down a backend:  When you shutdown the 
backend, do you see varnishlog say that the backend is healthy or sick?  
If one is sick, then the other should get the traffic if your VCL 
contains set req.backend = b1;

2) The Varnish load balancer does not keep e-commerce sessions for PHP.  The 
simplest solution to this is to install memcache, and put the following 
lines in your php.ini file:

[Session]
session.save_handler = memcached
session.save_path = "memcache-server1:11211,memcache-server2:11211"

instead of session.save_handler = files

However, I can't say for certain that this will definitely work - it 
depends on how your ecommerce application operates. 

3) S-flag: I'm not sure about this, but my gut feeling is that it's not 
causing the problems you're seeing.



Rob


alertebox wrote:
>
> Version: 2.0.6-1
>
> Install: .deb
>
> Os: Debian 5.0.3
>
> Hi,
>
> I've got two backends running apache2: front1.domain.com & 
> front2.domain.com, set with the load balancing configuration 
> from http://varnish-cache.org/wiki/LoadBalancing.
>
> _The issue is, when I shut down apache2 on the first backend, varnish 
> doesn't switch to the second and displays "Error 503 Service 
> Unavailable".  Is that normal behaviour from varnish?_
>
> Other question: _does the varnish load balancer keep PHP sessions for 
> e-commerce?  If so, how do I do that?_
>
> Varnishlog :
>
> 0 Backend_health - front1 Still healthy 4--X-RH 10 8 10 0.040008 
> 0.039814 HTTP/1.1 200 OK
>
> 0 Backend_health - front2 Still healthy 4--X-RH 10 8 10 0.066948 
> 0.066591 HTTP/1.1 200 OK
>
> _The S flag is missing in my log, is that an issue…_
>
> "4--X-S-RH" to notify that TCP socket shutdown succeeded 
> from http://varnish-cache.org/wiki/BackendPolling
>
> Part of default.vcl
>
> backend front1 {
>
>   .host = "front1.domain.com";
>
>   .port = "80";
>
>   .probe = { .url = "/";
>
>  .interval = 10s;
>
>  .timeout = 5s;
>
>  .window = 10;
>
>  .threshold = 8;
>
>  }
>
> }
>
>  
>
> backend front2 {
>
>   .host = "front2.domain.com";
>
>   .port = "80";
>
>   .probe = { .url = "/";
>
>  .interval = 10s;
>
>  .timeout = 5s;
>
>  .window = 10;
>
>  .threshold = 8;
>
>  }
>
> }
>
>  
>
> director b1 random
>
> {
>
>{ .backend = front1; .weight = 1; }
>
>{ .backend = front2; .weight = 1; }
>
> }
>
>  
>
> #director b1 round-robin {
>
> #{ .backend = front1; }
>
> #{ .backend = front2; }
>
> #}
>
> _Is that part of the configuration wrong?_
>
>  
>
> Thanks for your help...
>
>  
>
> 
>



Re: feature request cache refresh

2010-01-20 Thread Rob S
Poul-Henning Kamp wrote:
> That is why grace is split in two.
> You have obj.grace and req.grace, and the minimum of the two is what
> governs grace mode.
>   
Thanks for the clarification.  I'll do some experimenting, then update 
the documentation at http://varnish.projects.linpro.no/wiki/Performance
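For anyone experimenting along: the two grace knobs PHK describes are set in different subroutines. A sketch for Varnish 2.x, with purely illustrative values:

```vcl
sub vcl_recv {
    # How stale an object this client is willing to accept.
    set req.grace = 30s;
}

sub vcl_fetch {
    # How long past its TTL the object is kept around for grace delivery.
    # The effective grace is min(req.grace, obj.grace).
    set obj.grace = 1h;
}
```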
>   
>> If grace stays as it is, then I'd like to have the ability for that 
>> "first client who requests content after expiry" to get stale content, 
>> and have a parallel process update the cached content.
>> 
>
> Yes, and since it is my birthday, I'll wish for a pony as well :-)
>
>   

I hope you have an extra nice day, and that you get some time to relax 
and do things you want!  Best wishes for the year ahead, and thank you 
for your hard work on Varnish.

> Doing grace the way we did was relatively simple, doing it the way
> it really should work is not, so that improvement is somewhere in
> the queue.
>
>   



Re: feature request cache refresh

2010-01-20 Thread Rob S
Poul-Henning Kamp wrote:
> In message <4b56d2b1.9090...@gmail.com>, Rob S writes:
>
>   
>> Our experience of grace is that the first client who requests content 
>> after expiry is held up talking to the backend, whilst other subsequent 
>> clients get delivered the graced stale content.  But, if that's not 
>> intended, perhaps our config isn't quite right.
>> 
>
> That is indeed the correct behaviour.
>   
Great.  So, going back to my original post:

 > I'm not sure if it's best to have
 > a parameter to vary the behaviour of 'grace', or to have an additional
 > parameter for "max age of stale content to serve".


If grace stays as it is, then I'd like to have the ability for that 
"first client who requests content after expiry" to get stale content, 
and have a parallel process update the cached content.

Rob


Re: feature request cache refresh

2010-01-20 Thread Rob S
Tollef Fog Heen wrote:
> ]] Rob S 
>
> | Martin Boer wrote:
> | > I would like to see the following feature in varnish;
> | > during the grace period varnish will serve requests from the cache but 
| > simultaneously does a backend request and stores the new object.
> | >   
> | This would also be of interest to us.  I'm not sure if it's best to have 
> | a parameter to vary the behaviour of 'grace', or to have an additional 
> | parameter for "max age of stale content to serve".
>
> What is the difference between «max age of stale content to serve» and
> grace?  I might not be seeing your use case here?
>
>   
Our experience of grace is that the first client who requests content 
after expiry is held up talking to the backend, whilst other subsequent 
clients get delivered the graced stale content.  But, if that's not 
intended, perhaps our config isn't quite right.

Rob


Re: Time to replace the hit ratio with something more intuitive?

2010-01-19 Thread Rob S
Michael Fischer wrote:
> On Tue, Jan 19, 2010 at 12:09 PM, Nils Goroll  > wrote:
>
> I am suggesting to amend (or replace ?) this figure by a ratio of
> client
> requests being handled by the cache by total number of requests.
> In other words,
> a measure for how many of the client requests do not result in a
> backend request.
>
>
> I vote for the replacement option.  In my view, the ratio should be 
> (total requests)/(requests satisfied from cache).
That'd give odd figures (e.g. 1.25) when you'd expect to see 0.8.  Can we 
flip it the other way up?
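The reciprocal relationship between the two figures is easy to check with hypothetical counters (these are illustrative numbers, not varnishstat field names):

```python
def hit_rate(client_requests, backend_requests):
    """Fraction of client requests served without a backend fetch."""
    return (client_requests - backend_requests) / client_requests

def requests_per_hit(client_requests, backend_requests):
    """The proposed inverted figure: total requests per cached response."""
    return client_requests / (client_requests - backend_requests)

print(hit_rate(1000, 200))          # 0.8
print(requests_per_hit(1000, 200))  # 1.25
```

So a familiar 80% hit rate shows up as the less intuitive 1.25 under the proposed replacement.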

I'd also caution against replacing, as people may have monitoring 
against the old figures...

Rob


Re: Handling of cache-control

2010-01-19 Thread Rob S
Michael Fischer wrote:
> On Mon, Jan 18, 2010 at 4:37 PM, Poul-Henning Kamp  > wrote:
>
> In message , "Michael S. Fischer" writes:
> >On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
>
> >> My suggestion is to also look at Cache-control: no-cache,
> possibly also
> >> private and no-store and obey those.
> >
> >Why wasn't it doing it all along?
>
> Because we wanted to give the backend a chance to tell Varnish one
> thing with respect to caching, and the client another.
>
> I'm not saying we hit the right decision, and welcome any consistent,
> easily explainable policy you guys can agree on.
>
>
> Well, the problem is that application engineers who understand what 
> that header does have a reasonable expectation that the caches will 
> obey them, and so I think Varnish should honor them as Squid does. 
>  Otherwise surprising results will occur when the caching platform is 
> changed.
>
> Cache-Control: private certainly meets the goal you stated, at least 
> insofar as making Varnish behave differently than the client -- it 
> states that the client can cache, but Varnish (as an intermediate 
> cache) cannot.  
>
> I assume, however, that some engineers want a way to do the opposite - 
> to inform Varnish that it can cache, but inform the client that it 
> cannot.  Ordinarily I'd think this is not a very good idea, since you 
> almost always want to keep the cached copy as close to the user as 
> possible.  But I guess there are some circumstances where an engineer 
> would want to preload a cache with prerendered data that is expensive 
> to generate, and, also asynchronously force updates by flushing stale 
> objects with a PURGE or equivalent.  In that case the cache TTL would 
> be very high, but not necessarily meaningful. 
>
> I'm not sure it makes sense to extend the Cache-Control: header here, 
> because there could be secondary intermediate caches downstream that 
> are not under the engineer's control; so we need a way to inform only 
> authorized intermediate caches that they should cache the response 
> with the specified TTL.  
>
> One way I've seen to accomplish this goal is to inject a custom header 
> in the response, but we need to ensure it is either encrypted (so that 
> non-authorized caches can't see it -- but this could be costly in 
> terms of CPU) or removed by the last authorized intermediate cache as 
> the response is passed back downstream.
>
> --Michael

Michael,

You've obviously got some strong views about varnish, as we've all seen 
from the mailing list over the past few days!

When we deployed varnish, we did so in front of applications that 
weren't prepared to have a cache in front of them.  Accordingly, we 
disabled all caching on HTML and RSS type content in Varnish, and 
instead just cached CSS / JS / images.  This was a good outcome because 
we could stop using round robin DNS (which is a bit questionable, imho, 
if it includes more than two or three hosts) to the web servers, and 
instead just point 2 A records at Varnish.  We elected to use 
X-External-Cache-Control and X-Internal-TTL as headers that we'd set 
in Varnish-aware applications.  So, old apps that emit cache-control 
headers are completely uncached by Varnish, and new apps can benefit 
from a certain degree of caching by Varnish.

PHK's plans for 2010 will enable us to fully exploit our X-Internal-TTL 
headers because it'll be able to parse TTL values out of headers.  In 
the meantime, these are hard-set in Varnish to a value that's 
appropriate for our apps.

The X-External-Cache-Control is then presented as Cache-Control to 
public HTTP requests.

This describes how we've chosen to deploy varnish, without causing our 
application developers huge headaches.  In parallel, we've changed many 
of our sites to use local cookies+javascript to add personalisation to 
the most popular pages.  Overall, deploying Varnish has seen a big 
reduction in back end requests, PLUS the ability to load balance over a 
large pool whilst still implementing sticky-sessions where our apps 
still need them.  Varnish is, as the name suggests, a lovely layer in 
front of our platform which makes it perform better.

Now, to answer your points: 

1) Application developers being aware of caching headers:  I'd disagree 
here.  Our approach is to use code libraries to deliver functionality to 
the developers which the sysadmins can maintain.  There's always some 
overlap here, but we're comfortable with our position.  We're a PHP 
company, and so we've a class that's used statically, with methods such 
as Cacheability::noCache(), Cacheability::setExternalExpiryTime($secs), 
and Cacheability::setInternalExpiryTime($secs), as well as 
Cacheability::purgeCache($path).  Just as, I'm sure, your developers are 
using abstraction layers for database access, then they could use a 
simila

Re: feature request cache refresh

2010-01-19 Thread Rob S
Martin Boer wrote:
> I would like to see the following feature in varnish;
> during the grace period varnish will serve requests from the cache but 
> simultaneously does a backend request and stores the new object.
>   
This would also be of interest to us.  I'm not sure if it's best to have 
a parameter to vary the behaviour of 'grace', or to have an additional 
parameter for "max age of stale content to serve".
 
> If anyone has a workable workaround to achieve the same results I'm very 
> interested.
>   
Anyone?



Rob


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread Rob S
John Norman wrote:
> Folks,
>
> A couple more questions:
>
> (1) Are they any good strategies for splitting load across Varnish
> front-ends? Or is the common practice to have just one Varnish server?
>
> (2) How do people avoid single-point-of-failure for Varnish? Do people
> run Varnish on two servers, amassing similar local caches, but put
> something in front of the two Varnishes? Or round-robin-DNS?
>   
We're running with two instances and round-robin DNS.  The varnish 
servers are massively underused, and splitting the traffic also means we 
get half the hit rate.  But it avoids the SPOF.

Is anyone running LVS or similar in front of Varnish and can share their 
experience?


Rob


Re: Strange different behavior

2010-01-15 Thread Rob S
Poul-Henning Kamp wrote:
> In message <4b5080b4.5070...@gmail.com>, Rob S writes:
>   
>> Poul-Henning Kamp wrote:
>> 
>>> You really need to find out what bit of user-agent your backend
>>> cares about.  We are talking a multiplication factor of 100-1000 here
>>>   
>> Very slightly off-topic, but is it possible to vary based on a cookie?  
>> I'd rather leave one of our applications to process the user-agent, 
>> login credentials etc, than to move that logic into Varnish.
>> 
>
> I belive so, but the result will probably be that it varies on all your
> cookies...
>   
In the 2010 road map, with the ability to extract particular cookies, do 
you reckon we could do anything to make this work?  So, we'd set a 
cookie called "RenderType", and then Varnish would key on that cookie 
value?  In the short term, can we set the hash to be a combination of a 
cookie value and host and URL?
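[Editor's note: in Varnish 2.x something along these lines may work — a sketch only, assuming a cookie named RenderType; the regsub fallback when the cookie is absent is untested:

```vcl
sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    # Add just the RenderType cookie value to the hash key,
    # rather than varying on the whole Cookie header.
    if (req.http.Cookie ~ "RenderType=") {
        set req.hash += regsub(req.http.Cookie, ".*RenderType=([^;]*).*", "\1");
    }
    hash;
}
```
]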

Rob


Re: Strange different behavior

2010-01-15 Thread Rob S
Poul-Henning Kamp wrote:
>
> You really need to find out what bit of user-agent your backend
> cares about.  We are talking a multiplication factor of 100-1000 here
Very slightly off-topic, but is it possible to vary based on a cookie?  
I'd rather leave one of our applications to process the user-agent, 
login credentials etc, than to move that logic into Varnish.

Rob


Re: Varnish with Opera

2010-01-12 Thread Rob S
pub crawler wrote:
> Is there something anyone can think of that would cause this behavior?

Can you send a packet trace and/or the output of varnishlog for that 
specific XID?


Rob


Re: Varnish logging and data merging

2010-01-11 Thread Rob S
pub crawler wrote:
> We have a lot of logging going on in our applications. Logs pages, IP
> info, time date, URL parameters, etc.  Since many pages are being
> served out of Varnish cache,  they don't get logged by our
> application.
>
> How is anyone else out there working around this sort of problem with
> an existing application?  Considering a 1x1 graphic file inclusion
> into our pages to facilitate logging and ensuring Varnish doesn't
> cache it
We've not reported analytics based on raw Apache logs for a long time - 
they're far too polluted by spiders etc.  So, instead, for RSS we use 
Feedburner (who provide stats, and reduce our traffic load), and for 
general web access we either use Google Analytics (or commercial 
alternatives).  Google Analytics allows you to add and report on custom 
parameters, so it is very flexible.  However, as you suggested, an 
alternative is to use a tracking pixel.  Depending on how you've 
previously processed your information, you may find it very useful to 
encourage Varnish to route all hits to your tracking pixel to a specific 
server (obviously with failover).  This'd save you having to aggregate 
logs across multiple servers.
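Routing all tracking-pixel hits to one dedicated backend could be sketched like this in VCL 2.x (the backend name stats and the pixel path are hypothetical):

```vcl
backend stats {
    .host = "stats.example.com";
    .port = "80";
}

sub vcl_recv {
    # Send every hit on the tracking pixel to the stats server,
    # uncached, so its logs alone see the full traffic.
    if (req.url ~ "^/t\.gif") {
        set req.backend = stats;
        pass;
    }
}
```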

Rob


Re: VCL config file for specific URLs

2010-01-09 Thread Rob S
In our vcl_recv, we use things like:

if (req.url ~ "wp-admin") {
pass;
}
if (req.url ~ "/blog/wp-login.php") {
pipe;
}

// Don't cache from some IPs
if (client.ip == "1.2.3.4") {
pass;
}


Is that any help?


Rob

pub crawler wrote:
> Thanks for your input Rob.
>
> JPEG's, GIFs, etc. are all fine to cache, as they are very static in
> nature in our environment.
> Currently we have Varnish setup to cache:
> ico|html|htm|bmp|png|gif|jpg|jpeg|swf|css|js|rss|xml
>
> Our issue is our app servers get overwhelmed, become a large
> bottleneck and eventually fail under high load.  We can add more
> servers in horizontal scaling mode, but that creates more wasted power and
> more machines to maintain.  Our need is to address increased site
> load, mostly due to search spiders that are out of control but
> necessary.
>
> So we thought getting Varnish to cache all these dynamic pages would
> alleviate load on our application servers.
>
> Essentially, it is fine for Varnish to cache all of our ColdFusion
> pages (.cfm), except a handful of pages like these:
> http://www.website.com/Template/members/
> http://www.website.com/Template/dsp_addreview.cfm/flat/ID=157
> etc.
>
> Any idea of how to exclude this handful of pages from the cache?
>
>
>   
>> I'm not sure of the best way to supply a large list of URLs to pipe, but I'd
>> suggest that you think about turning the logic around.  I don't know the
>> nature of your site, but presumably it's safe to cache all JPEGs and
>> similar.  How much load would be alleviated by caching everything whose
>> content type is not text/html?
>>
>> Then, for text/html, is it possible for you to edit your backend site and
>> add a header such as "X-Is-Cacheable: yes" in your index.cfm and Review.cfm?
>>  Then, in vcl_fetch, you'd do something like:
>>
>> sub vcl_fetch {
>>     if (obj.http.Content-Type ~ "text/html") {
>>         if (obj.http.X-Is-Cacheable == "yes") {
>>             deliver;   # cache this
>>         } else {
>>             pass;      # don't cache
>>         }
>>     } else {
>>         deliver;       # cache this
>>     }
>> }
>>
>>
>> You might find this approach simpler than writing a big long list of pages
>> not to cache.
>>
>>
>> Rob
>>
>> 
>   




Re: VCL config file for specific URLs

2010-01-09 Thread Rob S
pub crawler wrote:
> That works fine, but we want Varnish to only do this on certain pages.
>
> More simply and shorter in number we want Varnish to adhere to the
> cookies only on certain user customizeable pages.
>
> For example (cache these in Varnish):
> http://www.website.com/Template/index.cfm
> http://www.website.com/Template/Review.cfm/flat/ID=
>
> DO NOT CACHE THESE - pipe to backend:
> http://www.website.com/Template/dsp_addreview.cfm
> http://www.website.com/Template/mapinput.cfm
>
> Any ideas on how to best go about providing a list of URLs to pipe in
> our default.vcl file?
>   
I'm not sure of the best way to supply a large list of URLs to pipe, but 
I'd suggest that you think about turning the logic around.  I don't know 
the nature of your site, but presumably it's safe to cache all JPEGs and 
similar.  How much load would be alleviated by caching everything whose 
content type is not text/html?

Then, for text/html, is it possible for you to edit your backend site 
and add a header such as "X-Is-Cacheable: yes" in your index.cfm and 
Review.cfm?  Then, in vcl_fetch, you'd do something like:

sub vcl_fetch {
    if (obj.http.Content-Type ~ "text/html") {
        if (obj.http.X-Is-Cacheable == "yes") {
            deliver;   # cache this
        } else {
            pass;      # don't cache
        }
    } else {
        deliver;       # cache this
    }
}


You might find this approach simpler than writing a big long list of 
pages not to cache.


Rob


Re: Architectural heads-up/call for comments

2010-01-06 Thread Rob S
Poul-Henning Kamp wrote:
> 1.  Kill the magic default VCL.
>   
This will make life a lot simpler for people starting out with Varnish, 
and help ensure that those people with advanced configurations don't 
overlook something.  I'd also like to suggest adding some form of 
"return pass", rather than just writing "pass".  This would make it 
clearer that execution ceases at the point where you write "pass".  
(Please let me know if I've got this wrong.)

> 2.  Client identity
>   
Access to cookies is great, but I don't necessarily think that a 
client.ident idea is sensible.  Is the "client" object going to be 
persistent between requests?  If so, what other properties will it 
have?  If it isn't persistent, then with existing VCL you can create a 
.ident property of a req, and use that to switch between pools.  Is this 
point really that you want to establish a mechanism for passing 
information into more advanced directors? 

> 3. Synth replies (and vcl_error{} ?)
>
> I want to make it possible to produce a synthetic reply anywhere
> in VCL...
Great.  Definitely useful.  A use case we had recently, but which we 
couldn't implement, was to synthetically generate some information about 
the backend pool.  So, we wanted a request for a particular javascript 
file to be generated by Varnish and say 
"randomhealthbackendhostname='"+be.name+"';".  We couldn't do this, but 
it might help guide thinking as to how this might get used.
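
The closest we could get with today's VCL is a static synthetic body, 
roughly as below (the trigger URL and the private status code are 
invented; interpolating be.name into the body is exactly the part that 
isn't possible):

```vcl
sub vcl_recv {
    if (req.url == "/backend-info.js") {
        error 750 "synthetic";  # arbitrary private status code
    }
}

sub vcl_error {
    if (obj.status == 750) {
        set obj.status = 200;
        set obj.http.Content-Type = "text/javascript";
        # Static text only - be.name cannot be interpolated here
        synthetic {"randomhealthbackendhostname='static-fallback';"};
        deliver;
    }
}
```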

> I also want to make it possible to suck in the synth body from a
> file
>   
Make sure we can specify the mime type of the synthetic reply!

> (The file will be read at compile-time!)
>   
Nervous of this.

> 4. VCL conversions
>   
This would definitely be worth doing.  We've got a few bits of config 
which could be enormously improved if we could parse header responses 
and put them as TTL values etc.  We use varnish in front of some apps 
which use session state / cookies / other things that make caching 
hard.  So, for apps we've been through and sanitised for varnish, we 
specify X-Varnish-Cache-Control which we parse in VCL and use to control 
Varnish's own data store, and X-External-Cache-Control, which is what we 
present out to the end requestor.  If the response only has 
X-Cache-Control, we assume that the app hasn't been 
prepared-for-varnish, and thus mustn't be cached by varnish, regardless 
of what's specified.  This allows us to fail-safe, but has the impact 
that we currently can't set the ttl to the precise value we want.  Your 
proposed conversions would enable us to improve our config.
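
As a concrete illustration, our vcl_fetch currently does roughly this 
(header names as described above; the TTL handling is omitted because it 
is precisely what we cannot express without the proposed conversions):

```vcl
sub vcl_fetch {
    # Only apps explicitly prepared for Varnish send this header;
    # anything else is never cached, whatever its own headers say.
    if (!obj.http.X-Varnish-Cache-Control) {
        pass;
    }
    # Present the external caching policy to the end requestor
    if (obj.http.X-External-Cache-Control) {
        set obj.http.Cache-Control = obj.http.X-External-Cache-Control;
    }
}
```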
> Thanks for listening, now it's your turn to tell me if this is
> stupid...
>   
Definitely good stuff.  Let us know how we can continue to help.

There's one other thing that I'd like people to comment / think about.  
We operate multiple varnish front ends, so that we can cope if one 
fails.  How do people distribute purges between them?
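
For context, the naive approach would be a script along these lines 
(hostnames are made up; it assumes each instance's VCL accepts the PURGE 
method from trusted addresses) - I'd be interested in anything more robust:

```shell
#!/bin/sh
# Sketch: replay one purge against every Varnish front end.
FRONTENDS="varnish1.example.com varnish2.example.com"

replicate_purge() {
    path="$1"
    for host in $FRONTENDS; do
        # ${CURL:-curl} lets a dry run substitute echo for curl
        ${CURL:-curl} -s -o /dev/null -X PURGE "http://${host}${path}" \
            || echo "purge failed on ${host}" >&2
    done
}

# Dry run: echo the curl arguments instead of sending requests
CURL=echo
replicate_purge "/blog/index.html"
```

This is fire-and-forget; retrying or queueing for an instance that is 
down would need something cleverer.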


Rob



Re: varnishtest

2010-01-04 Thread Rob S
Tollef Fog Heen wrote:
> ]] ll 
>
> | Is there any manual about the command varnishtest ? I useing varnish
> | 2.0.4 edition .
> | and I want to have a script to test the backend whether the backend
> | server is normal .I know the varnish can test it ,and configure some
> | thing to change the backend when it test is failed.so I want to test it
> | and send a mail or some other log can be trace .
>
> It sounds more like you are looking for a monitoring solution like
> nagios than varnishtest.  Varnishtest is used for testing that varnish
> works correctly and prevent regressions.
>
>   
You might want to look at earlier discussions about automatically 
monitoring the health of backends.  This was on the mailing list a few 
months ago - see
http://www.mail-archive.com/varnish-misc@projects.linpro.no/msg03169.html

Rob


Re: Monitoring backend status

2009-12-09 Thread Rob S
Poul-Henning Kamp wrote:
> In message <4b1f7fed.4020...@boerse-go.de>, Mark Plomer writes:
>   
>> Hi,
>> is it possible to retrieve some more details about the current backend 
>> status via the telnet interface?
>> A list of all backends with current status (up/down) would be very 
>> helpful for monitoring.
>> For the beginning, it would be enough to have the count of backends and 
>> the count of up-/down- backends
>> in the "stats" output.
>> 
>
> There is an undocumented "debug.health" command you can try.
>
> Ideas for a final version of this are most welcome.
>
>   
Brilliant.  At the moment we tail the varnishlog, post-process it, and 
write out to disk for further monitoring.  Here's the PHP we're using:

#!/usr/local/bin/php


Our monitoring then records "grep -c healthy /tmp/varnishbackendhealth"

However, it looks like we can now use the far simpler:

  echo 'debug.health' | nc localhost 6082 | grep -c Healthy

So - thanks for the change!



Rob


Re: Excluding url's from varnishhist

2009-10-08 Thread Rob S
Tollef Fog Heen wrote:
> ]] Paul Dowman 
>
> | Hi,
> | I'm having trouble figuring out how to exclude certain URL's from
> | varnishhist. I want to exclude static files, e.g. urls that match a pattern
> | like /\.png|\.gif|\.js|\.css|\.ico/ (because these don't cause much load on
> | the back-end, I want to see only the requests that would hit my app
> | servers).
> | 
> | I know about the -X regex argument, but it doesn't seem to do what I want.
> | Actually I don't really understand what it does, the man page says that it
> | excludes "log entries" that match a pattern, but as far as I can tell it
> | doesn't match URL's.
> | 
> | What's the right way to do this?
>
> There's currently no way to do this, I'm afraid.  varnishtop -b -i TxURL
> gives you the URLs, but not any timing information.
>   
I realise it'd produce slightly different information, but you could 
create a separate backend director for files that match this regex, then 
look at varnishncsa for those particular backends...



Re: Cookies being sent back with a cached response

2009-10-06 Thread Rob S
Cosimo Streppone wrote:
> Rob wrote:
>
>> I'd suggest you add the varnish ID in the Apache log files (or 
>> similar), then wait for a user to report the problem.
>
> That's interesting.
> I didn't think of cross-referencing apache logs.
You can log the varnish request in the Apache logs with something like:

LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"
%D %{Host}i %A \"%{X-Varnish}i\""  combined

>
>> work out what their user id should be, find their access in the 
>> varnish logs, and play back the log file to see the back end request.
>
> Are you referring to the varnishlog output, or the varnishncsa output?
> What's the common practice? Archiving varnishlog or varnishncsa (or
> something else) output?
I was referring to the varnishlog output.  You can replay a request with
something like:

varnishlog -r /var/log/varnish/varnish.log -c -o TxHeader 264247211

As for storage, all I can tell you is what we do - store both, rotate
daily or weekly, and keep historically as appropriate.
>
>> You'll end up kicking yourself once you find the problem - we 
>> certainly have on occasion.
>
> I hope so :)
> Thanks for your ideas.
>




Re: Cookies being sent back with a cached response

2009-10-05 Thread Rob S
Cosimo Streppone wrote:
> ...
>
> Thank you for reading through this, I would
> appreciate any help from you or hints on how to further
> debug this.
>   
Cosimo,

I'd suggest you add the varnish ID in the Apache log files (or similar), 
then wait for a user to report the problem.  Once they've reported it, 
work out what their user id should be, find their access in the varnish 
logs, and play back the log file to see the back end request.  Hopefully 
this'll narrow it down. 

If the user id is available in a cookie, then you could write some 
javascript to detect the situation that the image URL doesn't match the 
user id in the cookie, and then log extra information into a logging script.

You'll end up kicking yourself once you find the problem - we 
certainly have on occasion.

Just a few ideas.

Rob



Re: Varnish, long lived cache and purge on change

2009-08-19 Thread Rob S
Poul-Henning Kamp wrote:
> My only worry is that it adds a linked list to the objcore structure
> taking it from 88 to 104 bytes.
>   
I realise this could be undesirable, but at the moment varnish is 
proving quite difficult to use in sites that frequently purge, with 
different users adding their own workarounds (versioning URLs, 
restarting Varnish, tweaking the hash key, etc.).  Everything is a 
trade-off, but I think it's desirable to increase the memory footprint per 
object so as to not bring down the server with massive memory growth.

> Probably the more interesting question is how aggressive you want it to
> be: if it is too militant, it will cause a lot of needless disk activity
I feel that some sort of hysteresis on the size of the purge list would 
make most sense, perhaps starting to process when the list exceeds X 
bytes, and stopping when it falls below Y bytes.

Having thought a little more about this, I realise I don't know whether 
graced requests respect bans.  If they don't, then processing the ban 
list will change Varnish's behaviour. 

Rob


Re: Varnish, long lived cache and purge on change

2009-08-19 Thread Rob S
phk and other deep Varnish developers,

Do you think it'd ever be viable to have a sort of process that goes 
through the tail of the purge queue and applies the purges then deletes 
them from the queue?  If so, how much work would it be to implement?  
There are a fair number of us who would really appreciate something like 
this, and who I'm sure would contribute if someone were to implement it.

Thanks,


Rob

Karl Pietri wrote:
> Hey Ken, =)
> Yeah this is what i was afraid of.  I think we have a work around 
> by normalizing the hash key to a few select things we want to support 
> and on change setting the ttl of those objects to 0.  This would avoid 
> using the url.purge.  All of our urls in this case are pretty, and not 
> images.
>
> Thanks for the great info and sorry about the 4th thread on the 
> subject, i did not search thoroughly enough in the archives. 
>
> -Karl
>
> On Tue, Aug 18, 2009 at 4:34 PM, Ken Brownfield wrote:
>
> Hey Karl. :-)
>
> The implementation of purge in Varnish is really a queue of
> refcounted ban objects.  Every image hit is compared to the ban
> list to see if the object in cache should be reloaded from a backend.
>
> If you have purge_dups off, /every/ request to Varnish will regex
> against every single ban in the list.  If you have purge_dups on,
> it will at least not compare against duplicate bans.
>
> However, a ban that has been created will stay around until
> /every/ object that was in the cache at the time of that ban has
> been re-requested, dupe or no.  If you have lots of content,
> especially content that may not be accessed very often, the ban
> list can become enormous.  Even with purge_dups, duplicate ban
> entries remain in memory.  And the bans are only freed from RAM
> when their refcount hits 0 /AND/ they're at the very tail end of
> the ban queue.
>
> Because of the implementation, there's no clear way around this
> AFAICT.
>
> You can get a list of bans with the "purge.list" management
> command, but if it's more than ~2400 long you'll need to use
> netcat to get the list.  Also, purged dups will NOT show up in
> this list, even though they're sitting on RAM.  I have a trivial
> patch that will make dups show up in purge.list if you'd like to
> get an idea of how many bans you have.
>
> The implementation is actually really clever, IMHO, especially
> with regard to how it avoids locks, and there's really no other
> scalable way to implement a regex purge that I've been able to
> dream up.
>
> The only memory-reducing option within the existing implementation
> is to actually delete/free duplicate bans from the list, and to
> delete/free bans when an object hit causes the associated ban's
> refcount to hit 0.  However, this requires all access to the ban
> list to be locked, which is likely a significant performance hit.
>  I've written this patch, and it works, but I haven't put
> significant load on it.
>
> I'm not sure if Varnish supports non-regex/non-wildcard purges?
>  This would at least not have to go through the ban system,  but
> obviously it doesn't work for arbitrary path purges.
>
> We version our static content, which avoids cache thrash and this
> purge side-effect.  This is very easy if you have a central
> URL-generation system in code (templates, ajax, etc), but probably
> more problematic in situations where the URL needs to be "pretty".
>
> Ken
>
> On Aug 18, 2009, at 4:06 PM, Karl Pietri wrote:
>
>> Hello everyone,
>> Recently we decided that our primary page that everyone views
>> doesn't really change all that often.  In fact it changes very
>> rarely except for the stats counters (views, downloads, etc).  So
>> we decided that we wanted to store everything in varnish for a
>> super long time (and tell the client its not cacheable or
>> cacheable for a very short amount of time), flush the page from
>> varnish when it truly changes and have a very fast ajax call to
>> update the stats.  This worked great for about 2 days.   Then we
>> ran out of ram and varnish started causing a ton of swap activity
>> and it increased the response times of everything on the site to
>> unusable.
>>
>> After poking about i seem to have found the culprit.  When you
>> use url.purge it seems to keep a record of that and check every
>> object as it is fetched to see if it was purged or not.  To test
>> this i set a script to purge a lot of stuff and got the same
>> problem to happen.
>>
>>
>> from varnishstat -1
>>
>>  n_purge236369  .   N total active purges
>> n_purge_add236388 2.31 N new purges added
>> n_purge_retire 19 0.00 N old purges deleted
>> n_purge_obj_test

Re: Proxy from and ordered list of web server until one of them send a 200 status code

2009-08-18 Thread Rob S
Yann Malet wrote:
> The browser requests the page: frontend:8080/foo/bar
> This request reaches frontend:8080, which looks for the page in its 
> cache.  If the page is in the cache it is served from there; otherwise 
> the request is sent to webserver_new_cms:8081.  There are two cases: 
> the page exists or it doesn't.  If it exists, the page is served to the 
> frontend, which puts it in the cache and sends it to the client.  If it 
> does not exist, webserver_new_cms:8081 returns 404 and the frontend 
> should reverse proxy to webserver_old_cms:8082.  Again there are two 
> cases.  If the page exists it is served to the frontend, which caches 
> it and sends it to the client.  If it does not, a 404 error is returned 
> to the client because the page does not exist in either (new or old) CMS.
>
> It seems to me that vcl_fetch is the right place to hook this logic, 
> but so far I have no idea how to write it.  Some help/guidance would 
> be much appreciated.
>
Yann,

Varnish can definitely do this, and by default Varnish will serve from 
its cache anything that is there.  So, you just need to worry about the 
"it's not in the cache" scenario, and instead do something like the 
following.  First, you'll need to define your backend nodes:

   backend oldcmsnode { .host = "webserver_old_cms"; .port="8082"; }
   backend newcmsnode { .host = "webserver_new_cms"; .port="8081"; }

   director oldcms random {
  { .backend = oldcmsnode ; .weight = 1; }
   }

   director newcms random {
  { .backend = newcmsnode ; .weight = 1; }
   }

then, at the top of sub vcl_recv, we say "if this is our first attempt, 
use the newcms director, otherwise use oldcms":

   set req.backend = newcms;
   if (req.restarts > 0) {
      set req.backend = oldcms;
   }

in vcl_fetch, add some logic to say "if we got a 404 on our first 
attempt (i.e. against the new CMS), restart and try again":

   if (obj.status == 404 && req.restarts == 0) {
      restart;
   }

I hope this points you in the right direction.



Rob




Re: Varnish 503 Service unavailable error

2009-08-13 Thread Rob S
Paras Fadte wrote:
> Hi,
>
> Sometimes I receive 503 service unavailable error even though there
> are 4 backends . This would mean that all the backends are unavailable
> at a given time  which I don't think is the case . 
Can you replay your varnishlog file, and look for Backend_health items, 
and confirm that they did all go sick at the same time?  If they did, 
then you'll need to look at your backends themselves.  Are you 
separately monitoring them with Nagios, Zabbix, Pingdom or something 
like that?

If replaying varnishlog shows they weren't sick, then I suggest you get 
the varnish transaction ID from one of these 503 errors, and then 
extract the relevant portion of the varnishlog.  This might help explain 
the path taken by your request through the VCL, and help you diagnose a 
logic problem.

Finally, we define all our backends as being monitored by probe, but 
also redefine them without a probe. 

director failsafepool random {
{ .backend = serverAfailsafe; .weight = 1; }
{ .backend = serverBfailsafe; .weight = 1; }
{ .backend = serverCfailsafe; .weight = 1; }
{ .backend = serverDfailsafe; .weight = 1; }
}

We then use logic like:

set req.backend = monitoredpool;

if (!req.backend.healthy) {
set req.backend = failsafepool;
}

You can then look in your varnishncsa log to see whether the normal or 
failsafe backends were used.


Rob



Re: abnormally high load?

2009-08-12 Thread Rob S
Jeremy Hinegardner wrote:
> Hi all,
>
> I'm trying to figure out if this is a normal situation or not.  We have a
> varnish instance in front of 12 tokyo tyrant instances with some inline C 
> in the VCL to determine which backend to talk to.
>
>   

If you restart varnish during one of these spikes, does it instantly 
disappear?  I've seen this happen (though only spiking to about 12), and 
this is when Varnish has munched through far more memory than we've 
allocated it.  This problem is one I've been looking into with Ken 
Brownfield, and touches on 
http://projects.linpro.no/pipermail/varnish-misc/2009-April/002743.html 
and http://projects.linpro.no/pipermail/varnish-misc/2009-June/002840.html

Do any of these tie up with your experience?


Rob




Re: Welcome back from vacation!

2009-08-04 Thread Rob S
Dag-Erling Smørgrav wrote:
> "Poul-Henning Kamp"  writes:
>   
>> "Dag-Erling Smørgrav"  writes:
>> 
>>> Hmm, UK, September...  Why not Cambridge?
>>>   
>> Because London is where the bloke who stuck his hand up thought he
>> could do it.
>> 
>
> How about putting him in touch with rwatson@ and see if something can be
> arranged?
>
> DES
>   
Morning all.  I'm the brave person who raised my hand.  Could anyone who 
thinks they'd like to assist with the organisation (no matter how small 
a part) send me an email over the next day or two to introduce 
themselves?  I think the general flow of the organisation should be as 
follows:

1) Find people who might like to help
2) Write a quick questionnaire to go to the mailing lists asking people 
what they want for the conference
3) Review the answers, and get organising.

To introduce myself:  We selected varnish to provide load balancing and 
back-end-failure-tolerance on a hosting platform we run for a 
newspaper's website.  Whilst Varnish operates well, there are a number 
of little problems we run into which I'm sure everyone else has 
encountered.  A user-group meeting could help exchange experiences and 
answers between people.

My ideas for the first user-group meeting - I think we could discuss any 
of the following:
 * General networking and chatting with other users over a drink or two.
 * The basics: An introduction to varnish
 * VCL tips and tricks: What cunning things have people done?
 * What people have achieved using inline C?
 * Consider spinning off a quick session to discuss re-structuring the 
wiki to make things easier to find
 * A "Get problems off your chest" opportunity.  Are there things that 
people have small issues with but which they've not raised tickets for, 
because they think they're minor?
 * An opportunity to thank phk in person for all the hard work he's put 
in.

But, obviously these are my ideas.  What do other people want to do or 
discuss?

Finally, I'm away from 29th Aug through 28th September.  So, 
realistically, it'll have to be mid-late October.



Rob


Re: Memory spreading, then stop responding

2009-07-29 Thread Rob S

 Thanks Darryl.  However, I don't think this solution will work in our
 usage.  We're running a blog.  Administrators get un-cached access,
 straight through varnish.  Then, when they publish, we issue a purge
 across the entire site.  We need to do this as there's various bits of
 navigation that'd need to be updated.  I can't see that we can do this
 if we set obj.ttl.

 Has anyone any recommendations as to how best to deal with purges 
 like this?
>>>
>>> If you're issuing a PURGE across the entire site, why not simply 
>>> restart Varnish with an empty cache?
>>>
>>> --Michael
>>>
>> Because Varnish is also working for other hosts which don't need 
>> purging at the same time...
>
> My company gets around this madness by versioning its URLs.  It works 
> pretty well.
>
> --Michael


Thanks.  Are there any varnish developers who can comment on this 
memory-usage growth when purging?  I can't see any open tickets for 
this, and I'm sure several mailing list members would be happy to 
contribute towards a bounty for a fix.

Rob


Re: Memory spreading, then stop responding

2009-07-28 Thread Rob S
Michael S. Fischer wrote:
> On Jul 28, 2009, at 2:35 PM, Rob S wrote:
>> Thanks Darryl.  However, I don't think this solution will work in our
>> usage.  We're running a blog.  Administrators get un-cached access,
>> straight through varnish.  Then, when they publish, we issue a purge
>> across the entire site.  We need to do this as there's various bits of
>> navigation that'd need to be updated.  I can't see that we can do this
>> if we set obj.ttl.
>>
>> Has anyone any recommendations as to how best to deal with purges 
>> like this?
>
> If you're issuing a PURGE across the entire site, why not simply 
> restart Varnish with an empty cache?
>
> --Michael
>
Because Varnish is also working for other hosts which don't need purging 
at the same time...

Rob


Re: Memory spreading, then stop responding

2009-07-28 Thread Rob S
Darryl Dixon - Winterhouse Consulting wrote:
>> Darryl Dixon - Winterhouse Consulting wrote:
>> 
 
 Can anyone suggest why varnish is using more memory than it's
 allocated,
 and why varnishlog would stop returning any output?  Varnishlog was
 writing to disk, so I can probably extract the end of that, if it's of
 use.


 
>>> Hi Rob,
>>>
>>> There have been a few threads about this now on this mailing list.
>>> Probably it relates to the use of purge_url in your VCL. Are you using
>>> this function at all?
>>>
>>> regards,
>>> Darryl
>>>   
>> Darryl,
>>
>> Thanks for your reply.  Yes we are using purge_url, but I was under the
>> impression that since http://varnish.projects.linpro.no/changeset/3329,
>> there wasn't a problem.  I've not succeeded in finding the threads you
>> mentioned in your email.  Can you either point me at them, or let me
>> know their conclusion?
>>
>> 
>
> Hi Rob,
>
> See the thread concluding here (the solution to swap purge_url for
> obj.ttl=0 is the correct one):
> http://projects.linpro.no/pipermail/varnish-misc/2009-April/002743.html
>
> And also the thread concluding here:
> http://projects.linpro.no/pipermail/varnish-misc/2009-June/002840.html
>
> regards,
> Darryl Dixon
> Winterhouse Consulting Ltd
> http://www.winterhouseconsulting.com
>
>
>   
Thanks Darryl.  However, I don't think this solution will work in our 
usage.  We're running a blog.  Administrators get un-cached access, 
straight through varnish.  Then, when they publish, we issue a purge 
across the entire site.  We need to do this as there are various bits of 
navigation that need to be updated.  I can't see how we can do this by 
setting obj.ttl.

Has anyone any recommendations as to how best to deal with purges like this?


Rob



Re: Memory spreading, then stop responding

2009-07-28 Thread Rob S
Darryl Dixon - Winterhouse Consulting wrote:
>> 
>> Can anyone suggest why varnish is using more memory than it's allocated,
>> and why varnishlog would stop returning any output?  Varnishlog was
>> writing to disk, so I can probably extract the end of that, if it's of
>> use.
>>
>> 
>
> Hi Rob,
>
> There have been a few threads about this now on this mailing list.
> Probably it relates to the use of purge_url in your VCL. Are you using
> this function at all?
>
> regards,
> Darryl
Darryl,

Thanks for your reply.  Yes we are using purge_url, but I was under the 
impression that since http://varnish.projects.linpro.no/changeset/3329, 
there wasn't a problem.  I've not succeeded in finding the threads you 
mentioned in your email.  Can you either point me at them, or let me 
know their conclusion?

Thanks,


Rob


Memory spreading, then stop responding

2009-07-27 Thread Rob S
Hi,

Here's my setup:

[r...@varnish1 ~]# rpm -qa |grep varnish
varnish-libs-2.0.4-1.el5
varnish-2.0.4-1.el5
[r...@varnish1 ~]# uname -a
Linux varnish1.example.com 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 
2009 x86_64 x86_64 x86_64 GNU/Linux
[r...@varnish1 ~]# ps aux |grep varnishd
root 27993  0.0  0.0 106472   816 ?Ss   17:42   0:00 
/usr/sbin/varnishd -P /var/run/varnish.pid -a 10.1.2.51:80 -T :6082 -f 
/etc/varnish/default.vcl -u varnish -g varnish -s 
file,/var/lib/varnish/varnish_storage.bin,1G
varnish  28063  0.9  1.0 1474728 62860 ?   Sl   17:43   0:06 
/usr/sbin/varnishd -P /var/run/varnish.pid -a 10.1.2.51:80 -T :6082 -f 
/etc/varnish/default.vcl -u varnish -g varnish -s 
file,/var/lib/varnish/varnish_storage.bin,1G
root 28799  0.0  0.0  61192   732 pts/3S+   17:56   0:00 grep 
varnishd

The problem that I've encountered twice now is the following:

1) Varnish spreads to use over 8GB of swap, despite appearing to be 
configured to only use 1GB of storage
2) Our automated monitoring indicates that we're running out of swap space.
3) Restart varnish
4) From this point, varnishlog and varnishncsa return no output.

Can anyone suggest why varnish is using more memory than it's allocated, 
and why varnishlog would stop returning any output?  Varnishlog was 
writing to disk, so I can probably extract the end of that, if it's of use.

Very grateful to anyone who can point me in the right direction.




Rob




Re: tcp reset problem with varnish 2.0.4 on Solaris 10 (SPARC)

2009-07-06 Thread Rob S
Alex Hooper wrote:
> I wonder does anyone have an idea of what might be happening?
>   
Alex,

I've not seen this before, but I've found that 'varnishlog' typically 
provides very helpful information.  Can you post a log of the request?

Rob


Re: tcp reset problem with varnish 2.0.4 on Solaris 10 (SPARC)

2009-07-06 Thread Rob S
Alex Hooper wrote:

 > 5 VCL_call c recv
 > 5 VCL_return   c pass
 > 5 VCL_call c pass
 > 5 VCL_return   c pass
 > 5 VCL_call c error
 > 5 VCL_return   c deliver


It looks like you're using "pass", rather than "fetch", which probably 
isn't desirable when you're just doing a simple GET request.  I'd expect 
to see something like:

7 VCL_call c recv
7 VCL_return   c lookup
7 VCL_call c hash
7 VCL_return   c hash
7 VCL_call c miss
7 VCL_return   c fetch

Can you send your VCL file, so that I can take a look at the logic?


Rob