Re: health check path doesn't change after VCL reload (2.0.6)

2010-02-24 Thread John Norman
No, only the former / old path.

I'm not super-troubled right now because a Varnish restart did pick up the
new path (but at the cost of my cache) -- but I'm a bit worried about the
next time I have to change it.

I will be changing the probe interval soon, so that will give me a chance to
reproduce the problem, if it even exists.

As a bit of background:

I automate the VCL update to multiple servers whenever the VCL file has
changed.

Before the update, I also remove all of the inactive/old VCLs that are
sitting there.

Then I add the new one and "use" it.
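
Roughly, each server sees the following sequence (the management address,
VCL path, and config names here are illustrative, not copied from the
actual scripts):

varnishadm -T localhost:6082 vcl.list
varnishadm -T localhost:6082 vcl.discard cfg100119151756      # one per inactive config
varnishadm -T localhost:6082 vcl.load cfg100224161800 /etc/varnish/default.vcl
varnishadm -T localhost:6082 vcl.use cfg100224161800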

When I observed the probes going to the old URLs in my backend logs, I
checked the "active" VCL on all systems, and they all showed the new path.

In any case, I will try to reproduce and will send the results.

One last thing: During the restart on one system, I observed the issue
reported here:
http://zarathustrashallspeak.com/2009/11/28/varnish-startup-issue/

John

On Wed, Feb 24, 2010 at 4:18 PM, Poul-Henning Kamp wrote:

> In message , John Norman writes:
>
> >Still, the VCL indicated as "active" had a different path for the health
> >check.
>
> Hopefully both got probed?
>
> --
> Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
> p...@freebsd.org | TCP/IP since RFC 956
> FreeBSD committer   | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


host header altered by Varnish?

2010-02-24 Thread John Norman
Sorry about all the questions . . .

On my backend I want to redirect domain.com to www.domain.com

I see Host: domain.com in both the RX and TX sections of the log . . . but
the redirect isn't getting triggered.

The backend is Apache, and the redirect directives are routine.

  RewriteCond %{HTTP_HOST} ^domain.com$ [NC]
  RewriteRule ^(.*)$ http://www.domain.com$1 [R=301,L]

Am I missing something?
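
One way to double-check what actually reaches Apache is to watch the
backend side of the shared-memory log; with -b, the Tx* records show the
request Varnish sends to the backend (a quick sketch):

varnishlog -b -i TxHeader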

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: health check path doesn't change after VCL reload (2.0.6)

2010-02-24 Thread John Norman
That's great.

Still, the VCL indicated as "active" had a different path for the health
check.

On Wed, Feb 24, 2010 at 3:24 PM, Poul-Henning Kamp wrote:

> In message , John Norman writes:
>
> >We notice that after VCL is reloaded, our old health check path is still
> >getting checked.
> >
> >The only thing that seems to fix it is a varnish restart.
>
> No, unloading the old VCL code should also do it.
>
> We keep polling the backends of all loaded VCL, so they are all
> ready to roll the moment you do "vcl.use mumble".
>
>
> --
> Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
> p...@freebsd.org | TCP/IP since RFC 956
> FreeBSD committer   | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


health check path doesn't change after VCL reload (2.0.6)

2010-02-24 Thread John Norman
Hi.

We notice that after VCL is reloaded, our old health check path is still
getting checked.

The only thing that seems to fix it is a varnish restart.

Seems like I should log this as a bug . . . ?

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Not seeing a successful purge

2010-02-16 Thread John Norman
Thanks Ken, Laurence, and Tollef.

I'm going to add the normalization for gzip/deflate in vcl_recv
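
A sketch of what that normalization typically looks like (illustrative,
not the exact production VCL):

sub vcl_recv {
  if (req.http.Accept-Encoding) {
    if (req.http.Accept-Encoding ~ "gzip") {
      set req.http.Accept-Encoding = "gzip";
    } else if (req.http.Accept-Encoding ~ "deflate") {
      set req.http.Accept-Encoding = "deflate";
    } else {
      # neither encoding matters to the backend -- drop the header
      unset req.http.Accept-Encoding;
    }
  }
}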

But for user-agent:

While my backend does say "Vary: User-Agent", I also have this in vcl_recv:

  unset req.http.user-agent;

Isn't that enough (i.e., if I unset req.http.user-agent in my VCL, can
I leave Vary: User-Agent on the backend)? I ask because it may be
problematic to fix the backend in this case.

We have no content that differs depending on the user agent.

On Mon, Feb 15, 2010 at 3:29 AM, Tollef Fog Heen wrote:
>
> Yes, this means that if you know your backend only cares about
> gzip/non-gzip, you should normalise the header in vcl_recv.  Varnish
> can't know this, as it requires knowledge of your backend and
> application.
>
> However, even if you changed that, you still have a Vary on
> user-agent.
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Not seeing a successful purge

2010-02-12 Thread John Norman
I think it's the backend's (Apache/Passenger) header:

Vary: Accept-Encoding,User-Agent

Which seems to prevent (???) this from working in my vcl_hash:

  if (req.http.Accept-Encoding ~ "gzip") {
set req.hash += "gzip";
  } // etc

The Varnish doc says: "But by default, Varnish will perform no
transforms on the headers singled out by Vary: for comparison"
(http://varnish-cache.org/wiki/ArchitectureVary).

So . . . I'm not sure what I should do. If the backend says "Vary" for
Accept-Encoding, does that mean that I should or should not be able to
access that header for the purposes of setting the hash?

What I am observing is:

The browser makes a request with Accept-Encoding: gzip,deflate

When I try to purge, the purge request says: Accept-Encoding:
gzip,identity

Even though "gzip" is in both the browser's Accept-Encoding and the
purge's, they seem to be getting hashed differently.

If, when I do a purge, I force Accept-Encoding: gzip,deflate, then it
matches what the browser sent exactly, and I am able to purge
successfully.
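
Concretely, that forced purge can be reproduced by hand like this (host,
port, and path are illustrative, taken from the staging logs elsewhere in
this thread):

curl -X PURGE -H 'Accept-Encoding: gzip,deflate' http://staging1:8000/products/sillyputty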

On Fri, Feb 12, 2010 at 11:54 AM, John Norman  wrote:
> Here's a bit more on my "purge" problem -- a comparison of a purge
> that works on my development machine, vs. one that doesn't work on my
> staging system.
>
> On both, the browser request goes to haproxy, then to varnish. The VCL
> is identical, as quoted in a prior e-mail. The backends are different:
> on my local, it's the Ruby webrick server; on staging, it's
> Apache+Passenger.
>
> Again, I'm not purging through varnishadm: This is using the
> pseudo-http-method PURGE.
>
> One thing I can say about the staging environment is that if I do a
> non-browser request using mechanize from the staging system itself,
> then the later purge DOES work. In that case, the two differences are:
> The user-agent is the same for both the get and the purge; and
> requesting IP would be the same for both the get and the purge.
>
> Here are the log details.
>
> DEVELOPMENT -- first the browser request, showing the hit; then the
> purge, showing the hit. Awesome!
>
> 11 ReqStart     c 127.0.0.1 50808 1691945259
> 11 RxRequest    c GET
> 11 RxURL        c /products/sillyputty
> 11 RxProtocol   c HTTP/1.1
> 11 RxHeader     c Host: localhost
> 11 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS
> X 10.5; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB7.0
> 11 RxHeader     c Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> 11 RxHeader     c Accept-Language: en-us,en;q=0.5
> 11 RxHeader     c Accept-Encoding: gzip,deflate
> 11 RxHeader     c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> 11 RxHeader     c Keep-Alive: 300
> 11 RxHeader     c Connection: close
> 11 RxHeader     c Referer: http://localhost/
> 11 RxHeader     c Cookie:
> remember_token=f27172bfab54dc47d20b6d8c853afb8677fa2d11
> 11 RxHeader     c X-Forwarded-For: 127.0.0.1
> 11 VCL_call     c recv
> 11 VCL_return   c lookup
> 11 VCL_call     c hash
> 11 VCL_return   c hash
> 11 Hit          c 1691945214
> 11 VCL_call     c hit
> 11 VCL_return   c deliver
> 11 Length       c 201518
> 11 VCL_call     c deliver
> 11 VCL_return   c deliver
> 11 TxProtocol   c HTTP/1.1
> 11 TxStatus     c 200
> 11 TxResponse   c OK
> 11 TxHeader     c Cache-Control: max-age=8280, public
> 11 TxHeader     c X-Runtime: 818
> 11 TxHeader     c Content-Type: text/html; charset=utf-8
> 11 TxHeader     c Etag: "f29fbc0160d276fb97a298bf5bce8ff3"
> 11 TxHeader     c Server: WEBrick/1.3.1 (Ruby/1.9.1/2009-07-16)
> 11 TxHeader     c Content-Length: 201518
> 11 TxHeader     c Date: Fri, 12 Feb 2010 16:19:33 GMT
> 11 TxHeader     c X-Varnish: 1691945259 1691945214
> 11 TxHeader     c Age: 14
> 11 TxHeader     c Via: 1.1 varnish
> 11 TxHeader     c Connection: close
> 11 ReqEnd       c 1691945259 1265991573.541040897 1265991573.547173977
> 0.77009 0.55075 0.006078005
>
> 11 ReqStart     c ::1 51006 1691945309
> 11 RxRequest    c PURGE
> 11 RxURL        c /products/sillyputty
> 11 RxProtocol   c HTTP/1.1
> 11 RxHeader     c Accept: */*
> 11 RxHeader     c User-Agent: WWW-Mechanize/1.0.0
> (http://rubyforge.org/projects/mechanize/)
> 11 RxHeader     c Connection: keep-alive
> 11 RxHeader     c Keep-Alive: 300
> 11 RxHeader     c Accept-Encoding: gzip,identity
> 11 RxHeader     c Accept-Language: en-us,en;q=0.5
> 11 RxHeader     c Host: localhost:8000
> 11 RxHeader     c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> 11 VCL_call     c recv
> 11 VCL_return   c lookup
> 11 VCL_call     c hash
> 11 VCL_return   c hash
> 11 Hit          c 1691945214
> 11 VCL_call     c hit

Re: Not seeing a successful purge

2010-02-12 Thread John Norman
Here's a bit more on my "purge" problem -- a comparison of a purge
that works on my development machine, vs. one that doesn't work on my
staging system.

On both, the browser request goes to haproxy, then to varnish. The VCL
is identical, as quoted in a prior e-mail. The backends are different:
on my local, it's the Ruby webrick server; on staging, it's
Apache+Passenger.

Again, I'm not purging through varnishadm: This is using the
pseudo-http-method PURGE.

One thing I can say about the staging environment is that if I do a
non-browser request using mechanize from the staging system itself,
then the later purge DOES work. In that case, the two differences are:
The user-agent is the same for both the get and the purge; and
requesting IP would be the same for both the get and the purge.

Here are the log details.

DEVELOPMENT -- first the browser request, showing the hit; then the
purge, showing the hit. Awesome!

11 ReqStart c 127.0.0.1 50808 1691945259
11 RxRequestc GET
11 RxURLc /products/sillyputty
11 RxProtocol   c HTTP/1.1
11 RxHeader c Host: localhost
11 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS
X 10.5; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB7.0
11 RxHeader c Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
11 RxHeader c Accept-Language: en-us,en;q=0.5
11 RxHeader c Accept-Encoding: gzip,deflate
11 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
11 RxHeader c Keep-Alive: 300
11 RxHeader c Connection: close
11 RxHeader c Referer: http://localhost/
11 RxHeader c Cookie:
remember_token=f27172bfab54dc47d20b6d8c853afb8677fa2d11
11 RxHeader c X-Forwarded-For: 127.0.0.1
11 VCL_call c recv
11 VCL_return   c lookup
11 VCL_call c hash
11 VCL_return   c hash
11 Hit  c 1691945214
11 VCL_call c hit
11 VCL_return   c deliver
11 Length   c 201518
11 VCL_call c deliver
11 VCL_return   c deliver
11 TxProtocol   c HTTP/1.1
11 TxStatus c 200
11 TxResponse   c OK
11 TxHeader c Cache-Control: max-age=8280, public
11 TxHeader c X-Runtime: 818
11 TxHeader c Content-Type: text/html; charset=utf-8
11 TxHeader c Etag: "f29fbc0160d276fb97a298bf5bce8ff3"
11 TxHeader c Server: WEBrick/1.3.1 (Ruby/1.9.1/2009-07-16)
11 TxHeader c Content-Length: 201518
11 TxHeader c Date: Fri, 12 Feb 2010 16:19:33 GMT
11 TxHeader c X-Varnish: 1691945259 1691945214
11 TxHeader c Age: 14
11 TxHeader c Via: 1.1 varnish
11 TxHeader c Connection: close
11 ReqEnd   c 1691945259 1265991573.541040897 1265991573.547173977
0.77009 0.55075 0.006078005

11 ReqStart c ::1 51006 1691945309
11 RxRequestc PURGE
11 RxURLc /products/sillyputty
11 RxProtocol   c HTTP/1.1
11 RxHeader c Accept: */*
11 RxHeader c User-Agent: WWW-Mechanize/1.0.0
(http://rubyforge.org/projects/mechanize/)
11 RxHeader c Connection: keep-alive
11 RxHeader c Keep-Alive: 300
11 RxHeader c Accept-Encoding: gzip,identity
11 RxHeader c Accept-Language: en-us,en;q=0.5
11 RxHeader c Host: localhost:8000
11 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
11 VCL_call c recv
11 VCL_return   c lookup
11 VCL_call c hash
11 VCL_return   c hash
11 Hit  c 1691945214
11 VCL_call c hit
11 TTL  c 1691945214 VCL 0 1265991617
 0 Debug- "VCL_error(200, Purged.)"
11 VCL_return   c error
11 VCL_call c error
11 VCL_return   c deliver
11 Length   c 322
11 VCL_call c deliver
11 VCL_return   c deliver
11 TxProtocol   c HTTP/1.1
11 TxStatus c 200
11 TxResponse   c Purged.
11 TxHeader c Server: Varnish
11 TxHeader c Retry-After: 0
11 TxHeader c Content-Type: text/html; charset=utf-8
11 TxHeader c Content-Length: 322
11 TxHeader c Date: Fri, 12 Feb 2010 16:20:16 GMT
11 TxHeader c X-Varnish: 1691945309
11 TxHeader c Age: 0
11 TxHeader c Via: 1.1 varnish
11 TxHeader c Connection: close
11 ReqEnd   c 1691945309 1265991616.884471893 1265991616.884622097
0.000173807 0.99182 0.51022

---

Now my problematic STAGING system -- first the GET with the hit, then
the purge that fails to hit.

4 ReqStart c 10.253.191.95 45944 904319331
4 RxRequestc GET
4 RxURLc /products/sillyputty
4 RxProtocol   c HTTP/1.1
4 RxHeader c Host: staging1.example.com
4 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X
10.5; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB7.0
4 RxHeader c Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
4 RxHeader c Accept-Language: en-us,en;q=0.5
4 RxHeader c Accept-Encoding: gzip,deflate
4 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
4 RxHeader c Keep-Alive: 300
4 RxHeader c Connection: keep-alive
4 RxHeader c Referer: http://staging1.example.com/
4 RxHeader c Cookie: cehq-id=10.252.66.194.1263417178788050;
__

Re: Not seeing a successful purge

2010-02-12 Thread John Norman
Thanks, Laurence. But . . .

Am I misreading the varnishlog I quoted? The Accept-Encoding headers
seem both to contain gzip:

Here's the Accept-Encoding from the browser:

4 RxHeader c Accept-Encoding: gzip,deflate

Here it is from the PURGE:

4 RxHeader c Accept-Encoding: gzip,identity

(Both quoted from my original message.)

They both contain gzip, so the hash should be the URL + gzip:

sub vcl_hash {
  set req.hash += req.url;

  if (req.http.Accept-Encoding ~ "gzip") {
set req.hash += "gzip";
  } else if (req.http.Accept-Encoding ~ "deflate") {
set req.hash += "deflate";
  }
  return (hash);
}

So, I'm still not seeing why the purge isn't hitting:

From the browser's GET:

4 VCL_return   c hash
4 Hit  c 904319089
4 VCL_call c hit
4 VCL_return   c deliver

From the PURGE request:

4 VCL_call c hash
4 VCL_return   c hash
4 VCL_call c miss
0 Debug- "VCL_error(404, Not in cache.)"
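
For reference, a sketch of the vcl_recv alternative Laurence suggests
below -- purging by expression instead of by hash, so Accept-Encoding
never enters into it (the purge() syntax is taken from his reply; on some
2.0.x builds the equivalent call may be purge_url() instead):

sub vcl_recv {
  if (req.request == "PURGE") {
    # drop every cached object whose URL matches, regardless of how it was hashed
    purge("req.url ~ " req.url);
    error 200 "Purged.";
  }
}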

On Fri, Feb 12, 2010 at 5:49 AM, Laurence Rowe  wrote:
> Hi,
>
> Your PURGE request is getting a different hash than your browser
> requests because there is no Accept-Encoding header on the PURGE. (You
> see the same problem when using Vary on the response). See
> http://varnish-cache.org/wiki/Purging. You can either use <<
> purge("req.url ~ " req.url); >> in vcl_recv, or send multiple PURGE
> requests with each of the relevant Accept-Encoding values.
>
> Laurence
>
> On 11 February 2010 23:25, John Norman  wrote:
>> Hi, folks.
>>
>> I'm trying to purge with the pseudo HTTP "PURGE" method Varnish supports.
>>
>> I do seem to have a cached page, but the PURGE response suggests that
>> it's missing.
>>
>> So . . . any idea why the PURGE isn't working?
>>
>> In my VCL, my vcl_hash looks like this (I intend it to only hash on
>> the request URL and compression):
>>
>> sub vcl_hash {
>>  set req.hash += req.url;
>>
>>  if (req.http.Accept-Encoding ~ "gzip") {
>>    set req.hash += "gzip";
>>  } else if (req.http.Accept-Encoding ~ "deflate") {
>>    set req.hash += "deflate";
>>  }
>>  return (hash);
>> }
>>
>> And the checks for PURGE look like this (full VCL way below):
>>
>> sub vcl_hit {
>>  if (req.request == "PURGE") {
>>    set obj.ttl = 0s;
>>    error 200 "Purged.";
>>  }
>>  if (!obj.cacheable) {
>>    pass;
>>  }
>>
>>  deliver;
>> }
>> sub vcl_miss {
>>  if (req.request == "PURGE") {
>>    error 404 "Not in cache.";
>>  }
>> }
>>
>> And in vcl_recv:
>>
>>  if (req.request == "PURGE") {
>>    lookup;
>>  }
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Not seeing a successful purge

2010-02-11 Thread John Norman
Hi, folks.

I'm trying to purge with the pseudo HTTP "PURGE" method Varnish supports.

I do seem to have a cached page, but the PURGE response suggests that
it's missing.

So . . . any idea why the PURGE isn't working?

In my VCL, my vcl_hash looks like this (I intend it to only hash on
the request URL and compression):

sub vcl_hash {
  set req.hash += req.url;

  if (req.http.Accept-Encoding ~ "gzip") {
set req.hash += "gzip";
  } else if (req.http.Accept-Encoding ~ "deflate") {
set req.hash += "deflate";
  }
  return (hash);
}

And the checks for PURGE look like this (full VCL way below):

sub vcl_hit {
  if (req.request == "PURGE") {
set obj.ttl = 0s;
error 200 "Purged.";
  }
  if (!obj.cacheable) {
pass;
  }

  deliver;
}
sub vcl_miss {
  if (req.request == "PURGE") {
error 404 "Not in cache.";
  }
}

And in vcl_recv:

  if (req.request == "PURGE") {
lookup;
  }

Below is some output from varnishlog, first showing the return of the
cached page -- then the response for the PURGE.

The request from the browser comes through HAProxy first, then into Varnish.

The PURGE request is via Ruby Mechanize.

4 ReqStart c 10.253.191.95 60271 904319188
4 RxRequestc GET
4 RxURLc /products/sillyputty
4 RxProtocol   c HTTP/1.1
4 RxHeader c Host: staging1.example.com
4 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X
10.5; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB7.0
4 RxHeader c Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
4 RxHeader c Accept-Language: en-us,en;q=0.5
4 RxHeader c Accept-Encoding: gzip,deflate
4 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
4 RxHeader c Keep-Alive: 300
4 RxHeader c Connection: keep-alive
4 RxHeader c Cookie: cehq-id=10.252.66.194.1263417178788050;
__utma=240927894.185175319.1263417179.1265907679.1265923273.61;
__utmz=240927894.1263591912.11.2.utmcsr=localhost:3000|utmccn=(referral)|utmcmd=referral|utmcct=/;
__utma=229000926.194920698.1263480064.126591
4 RxHeader c X-Forwarded-For: 75.150.106.113
4 VCL_call c recv
4 VCL_return   c lookup
4 VCL_call c hash
4 VCL_return   c hash
4 Hit  c 904319089
4 VCL_call c hit
4 VCL_return   c deliver
4 Length   c 13825
4 VCL_call c deliver
4 VCL_return   c deliver
4 TxProtocol   c HTTP/1.1
4 TxStatus c 200
4 TxResponse   c OK
4 TxHeader c Server: Apache/2.2.12 (Ubuntu)
4 TxHeader c X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.9
4 TxHeader c Cache-Control: max-age=7680, public
4 TxHeader c X-Runtime: 56860
4 TxHeader c ETag: "1de74468f783ce10a7af58decf0b5871"
4 TxHeader c Status: 200
4 TxHeader c Vary: Accept-Encoding,User-Agent
4 TxHeader c Content-Encoding: gzip
4 TxHeader c Content-Type: text/html; charset=utf-8
4 TxHeader c Content-Length: 13825
4 TxHeader c Date: Thu, 11 Feb 2010 22:09:33 GMT
4 TxHeader c X-Varnish: 904319188 904319089
4 TxHeader c Age: 852
4 TxHeader c Via: 1.1 varnish
4 TxHeader c Connection: keep-alive
4 ReqEnd   c 904319188 1265926173.262102842 1265926173.262276649
0.60797 0.000109196 0.64611

4 ReqStart c 10.253.191.95 60313 904319225
4 RxRequestc PURGE
4 RxURLc /products/sillyputty
4 RxProtocol   c HTTP/1.1
4 RxHeader c Accept: */*
4 RxHeader c User-Agent: WWW-Mechanize/1.0.0
(http://rubyforge.org/projects/mechanize/)
4 RxHeader c Connection: keep-alive
4 RxHeader c Keep-Alive: 300
4 RxHeader c Accept-Encoding: gzip,identity
4 RxHeader c Accept-Language: en-us,en;q=0.5
4 RxHeader c Host: staging1:8000
4 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
4 VCL_call c recv
4 VCL_return   c lookup
4 VCL_call c hash
4 VCL_return   c hash
4 VCL_call c miss
0 Debug- "VCL_error(404, Not in cache.)"
4 VCL_return   c error
4 VCL_call c error
4 VCL_return   c deliver
4 Length   c 340
4 VCL_call c deliver
4 VCL_return   c deliver
4 TxProtocol   c HTTP/1.1
4 TxStatus c 404
4 TxResponse   c Not in cache.
4 TxHeader c Server: Varnish
4 TxHeader c Retry-After: 0
4 TxHeader c Content-Type: text/html; charset=utf-8
4 TxHeader c Content-Length: 340
4 TxHeader c Date: Thu, 11 Feb 2010 22:10:02 GMT
4 TxHeader c X-Varnish: 904319225
4 TxHeader c Age: 0
4 TxHeader c Via: 1.1 varnish
4 TxHeader c Connection: close
4 ReqEnd   c 904319225 1265926202.794907331 1265926202.795089960
0.54121 0.000142574 0.40054

For the sake of completeness, here's the full VCL (the reason for the
director with one server is 'cos in production we round-robin against 3
backends):

backend reviews0 {
  .host = "10.253.191.95";
  .port = "7000";

#.probe = {
#  .url = "/heartbeat";
#  .timeout = 10.0 s;
#  .interval = 120 s;
#  .window = 10;
#  .threshold = 7;
#}

}


director reviews round-robin {

{ .backend = reviews0; }

}

sub vcl_hash {
  s

Re: Bug fix 601 - Will be in the next release?

2010-01-27 Thread John Norman
Yes, but the workarounds seem to replace x-forwarded-for with another
header, e.g., X-Real-Forwarded-For
(https://wiki.fourkitchens.com/display/PF/Workaround+for+Varnish+X-Forwarded-For+bug).
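
For reference, the linked workaround boils down to something like this in
vcl_recv (a sketch; the X-Real-Forwarded-For name comes from that wiki
page, and the backend is then configured to log that header instead):

sub vcl_recv {
  if (req.http.X-Forwarded-For) {
    # preserve the header set by whatever fronts Varnish (haproxy, another proxy, ...)
    set req.http.X-Real-Forwarded-For = req.http.X-Forwarded-For;
  } else {
    set req.http.X-Real-Forwarded-For = client.ip;
  }
}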

And the FAQ entry
(http://varnish-cache.org/wiki/FAQ#HowcanIlogtheclientIPaddressonthebackend)
seems to assume that there is nothing fronting Varnish that has
already added an x-forwarded-for header (which is what the bug is
about).

In any case, I assume that the fix for bug 601 will go into the next
release . . .

On Wed, Jan 27, 2010 at 5:42 PM, pablort  wrote:
> Have you tried google "varnish x-forwarded-for" ?
>
> There an FAQ entry addressing that (somewhat).
>
> []'s
>
> On Wed, Jan 27, 2010 at 3:47 PM, John Norman  wrote:
>>
>> Folks,
>>
>> Will the fix for http://varnish-cache.org/ticket/601 (cf.
>> http://varnish-cache.org/ticket/540) be in the next release?
>>
>> My Varnish gets an x-forwarded-for from another server: What will the
>> VCL be to use that instead of whatever Varnish tries to append?
>>
>> John
>> ___
>> varnish-misc mailing list
>> varnish-misc@projects.linpro.no
>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Bug fix 601 - Will be in the next release?

2010-01-27 Thread John Norman
Folks,

Will the fix for http://varnish-cache.org/ticket/601 (cf.
http://varnish-cache.org/ticket/540) be in the next release?

My Varnish gets an x-forwarded-for from another server: What will the
VCL be to use that instead of whatever Varnish tries to append?

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Restoring cookies on a miss

2010-01-25 Thread John Norman
Folks,

I've been trying to implement a technique posted here to restore
cookies on a cache miss.

The original question is here:

http://projects.linpro.no/pipermail/varnish-misc/2010-January/003505.html

and an interesting answer is here:

http://projects.linpro.no/pipermail/varnish-misc/2010-January/003506.html

Below is my .vcl file.

And again, here's the use case:

We have certain pages that should never be cached. The user comes in
with cookies set: "session=foo" or some such.

We strip the cookie and do a lookup.

If there's a hit, return what is in the cache.

If there's a miss, we'd like to fetch with the cookie.

Then, in fetch, pass for pages set for no-cache, and deliver for those
that are public.

It can be assumed that these pages are never hit first by non-cookied users.

But -- I am seeing a lot of requests that have no cookies on the backend.

John

Here's the VCL:

backend reviews0 {
  .host = "127.0.0.1";
  .port = "7000";
}

backend reviews1 {
  .host = "127.0.0.1";
  .port = "7001";
}

director reviews round-robin {
{ .backend = reviews0; }
{ .backend = reviews1; }
}

sub vcl_recv {

unset req.http.user-agent;

set req.backend = reviews;

  if (req.request != "GET" && req.request != "HEAD") {
pass;
  }

  set req.http.OLD-Cookie = req.http.Cookie;
  unset req.http.cookie;

lookup;
}

sub vcl_miss {
  if (req.http.OLD-Cookie) {
set bereq.http.Cookie = req.http.OLD-Cookie;
unset req.http.OLD-Cookie;
  }
  fetch;
}

sub vcl_fetch {

  unset obj.http.user-agent;

if (obj.http.Pragma ~ "no-cache" ||
  obj.http.Cache-Control ~ "no-cache" ||
  obj.http.Cache-Control ~ "private") {
pass;
}

if (obj.http.Cache-Control ~ "public") {
unset obj.http.Set-Cookie;
deliver;
}
pass;
}
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


use new VCL -- drops cache?

2010-01-21 Thread John Norman
When you switch to a new VCL on the fly (vcl.use config), is the cache dumped?

(I think the answer is "no," but I just want to make sure.)

(Munin shows a drastic sudden reduction in memory usage -- but I don't
think Varnish restarted.)

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


In management port: vcl.discard

2010-01-19 Thread John Norman
Folks,

I've been loading new VCL files with a timestamp on the name (e.g.,
cfg100119151756).

vcl.discard is great if you know the name.

But it could be very useful to have a command such as "vcl.purge" to
get rid of all configs except for the active one.
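
In the meantime, a small shell loop can approximate a "vcl.purge" (a
sketch; it assumes the 2.0 vcl.list layout, where the first column is the
status and the last is the config name, and a management port of
localhost:6082):

for cfg in $(varnishadm -T localhost:6082 vcl.list | awk '$1 == "available" { print $NF }'); do
    varnishadm -T localhost:6082 vcl.discard "$cfg"
done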

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Health check -- just connect, or full response?

2010-01-19 Thread John Norman
Folks,

For the health check (or, ahem, "backend probe," as the docs have it --
ouch!), does "health" just mean the ability to connect?

Or does it check for a 200?

Or get an entire page and verify that it's the right number of bytes . . . ?

Or . . . ?

In short, what constitutes a successful probe?

I'm using .url, not .request.
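
For reference, a probe block of the kind in question (the values are
illustrative):

backend app {
  .host = "127.0.0.1";
  .port = "7000";
  .probe = {
    .url = "/heartbeat";    # GET this URL every .interval
    .timeout = 2 s;
    .interval = 30 s;
    .window = 8;            # look at the last 8 probes...
    .threshold = 6;         # ...and require at least 6 good ones to call the backend healthy
  }
}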

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-17 Thread John Norman
Hey, folks, I just want to thank for this great thread -- I think it
would be well worth breaking it up into Q/A for the FAQ.

We're still a bit undecided as to how we're going to configure our
systems, but we feel like we have options now.

On Sun, Jan 17, 2010 at 4:10 PM, Ross Brown  wrote:
> I hadn't used varnishadm before. Looks useful.
>
> Thanks!
>
> -Original Message-
> From: p...@critter.freebsd.dk [mailto:p...@critter.freebsd.dk] On Behalf Of 
> Poul-Henning Kamp
> Sent: Monday, 18 January 2010 9:38 a.m.
> To: Ross Brown
> Cc: varnish-misc@projects.linpro.no
> Subject: Re: Strategies for splitting load across varnish instances? And 
> avoiding single-point-of-failure?
>
> In message <1ff67d7369ed1a45832180c7c1109bca13e23e7...@tmmail0.trademe.local>,
> Ross Brown writes:
>>> So it is possible to start your Varnish with one VCL program, and have
>>> a small script change to another one some minutes later.
>>
>>What would this small script look like?
>
>        sleep 600
>        varnishadm vcl.load real_thing "/usr/local/etc/varnish/real.vcl"
>        varnishadm vcl.use real_thing
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> p...@freebsd.org         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
> ___
> varnish-misc mailing list
> varnish-misc@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Another question - after clearing cache (or restart), avoiding killing the backend?

2010-01-15 Thread John Norman
Thanks for the various answers!

Since the cache is cleared after a restart, how do people avoid
slamming their backends as the cache is refilled?

(I know one answer is: Don't restart. But let's say that Varnish
crashes, or there are other issues.)

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

2010-01-15 Thread John Norman
Folks,

A couple more questions:

(1) Are they any good strategies for splitting load across Varnish
front-ends? Or is the common practice to have just one Varnish server?

(2) How do people avoid single-point-of-failure for Varnish? Do people
run Varnish on two servers, amassing similar local caches, but put
something in front of the two Varnishes? Or round-robin-DNS?

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Strange different behavior

2010-01-15 Thread John Norman
OK.

But if your application backend really doesn't do anything different
for different user agents, then one should probably remove the
user-agent?

On Fri, Jan 15, 2010 at 7:52 AM, Poul-Henning Kamp  wrote:
> In message , John Norman writes:
>>Sorry to be so obtuse:
>>
>>So with the default setup, there will be a cached copy of a page for
>>every single user agent?
>
> Yes, unless you do something about the "Vary: User-Agent" header
> returned from the backend.
>
>>If so, does anyone have a good number of user agents that should be
>>supported for calculating the size of the cache? E.g., if I've guessed
>>64M for my pages, and I imagine that there are 10 user agents (I know
>>it's more) then I'd want to multiply that 64M x 10.
>
> You really need to find out what bit of user-agent your backend
> cares about.  We are talking a multiplication factor of 100-1000 here.
>
> Poul-Henning
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> p...@freebsd.org         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Strange different behavior

2010-01-15 Thread John Norman
Sorry to be so obtuse:

So with the default setup, there will be a cached copy of a page for
every single user agent?

If so, does anyone have a good number of user agents that should be
supported for calculating the size of the cache? E.g., if I've guessed
64M for my pages, and I imagine that there are 10 user agents (I know
it's more) then I'd want to multiply that 64M x 10.

On Fri, Jan 15, 2010 at 4:19 AM, Poul-Henning Kamp  wrote:
> In message <20100114215025.gb9...@kjeks.kristian.int>, Kristian Lyngstol writes:
>
>>Vary on User-Agent is generally bad, and you should Just Fix That [tm].
>
> Apart from the compatibility issue, a secondary reason it is a bad
> idea, is that User-Agent is practically unique for every single PC
> in the world, so you will cache up to hundreds of copies of the pages
> for no good reason.
>
> If your site is running live on Varnish, try running:
>
>        varnishtop -i rxheader -I User-Agent
>
> and see how many different strings your clients send you...
>
> In all likelihood, your backend looks at only one or two of the bits
> in User-Agent (MSIE or Mozilla ?) but Varnish has to look at the
> entire string, since it has no way of knowing what your backend
> looks at.
>
> One workaround, is to do what we call "User-Agent-Washing", where
> Varnish rewrites the Useragent to the handfull of different variants
> your backend really cares about, along the lines of:
>
> sub vcl_recv {
>        if (req.http.user-agent ~ "MSIE") {
>                set req.http.user-agent = "MSIE";
>        } else {
>                set req.http.user-agent = "Mozilla";
>        }
> }
>
> So that you only cache the relevant number of copies.
>
> But as Kristian says:  The best thing, is to not Vary on User-Agent
> in the first place, that's how the InterNet is supposed to work.
>
> Poul-Henning
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> p...@freebsd.org         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
> ___
> varnish-misc mailing list
> varnish-misc@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Does a Varnish restart clear the existing cache?

2010-01-14 Thread John Norman
I think the answer is "no," but . . .

Does a Varnish restart clear the existing cache?

(using the "file" storage.)
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: grace scenario - app side and varnish

2010-01-13 Thread John Norman
Thanks. An explicit statement of this in the docs would be helpful.

On Wed, Jan 13, 2010 at 6:15 AM, Tollef Fog Heen wrote:
> ]] John Norman
>
> | I would like to set "grace" in Varnish so that this stays in the cache
> | for some long amount of time, and then when I hit the site, I get
> | stale content (no wait), and the backend gets triggered to refresh the
> | cache.
>
> grace works by serving old content to any clients which would otherwise
> be put on a wait list.  It does not prefetch or fetch asynchronously.  So
> with grace, the first client has to wait, and any subsequent ones just
> get the graced content.
>
> --
> Tollef Fog Heen
> Redpill Linpro -- Changing the game!
> t: +47 21 54 41 73
> ___
> varnish-misc mailing list
> varnish-misc@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: "trunk" in grace docs?

2010-01-12 Thread John Norman
Thanks!

On Tue, Jan 12, 2010 at 6:51 PM, Ross Brown  wrote:
> Syntax has changed from obj.grace to beresp.grace, if you are using a recent 
> build.
>
> See http://varnish.projects.linpro.no/changeset/4224
>
> Ross
>
>
> -Original Message-
> From: varnish-misc-boun...@projects.linpro.no 
> [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of John Norman
> Sent: Wednesday, 13 January 2010 9:27 a.m.
> To: varnish-misc@projects.linpro.no
> Subject: "trunk" in grace docs?
>
> At this page: http://varnish.projects.linpro.no/wiki/VCLExampleGrace
>
> What does the comment "# or for trunk" mean?
>
> And what is the difference between setting grace on obj and on beresp?
>
> (It would be helpful were both of these questions addressed in the doc.)
>
> John
> ___
> varnish-misc mailing list
> varnish-misc@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


grace scenario - app side and varnish

2010-01-12 Thread John Norman
Folks,

I'm having a dickens of a time triggering "grace" mode.

Varnish version 2.0.6.

I am using a director.

On my app server, the expiration is set to, e.g.,

Cache-Control: max-age=10, public

I would like to set "grace" in Varnish so that this stays in the cache
for some long amount of time, and then when I hit the site, I get
stale content (no wait), and the backend gets triggered to refresh the
cache.

It would seem that all I should have to do is set req.grace = 1m; at
the top of vcl_recv (or for sections that go to "lookup"), and in
vcl_fetch, set obj.grace = 1m.
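
Concretely, in 2.0.x syntax (a sketch):

sub vcl_recv {
  set req.grace = 1m;
}

sub vcl_fetch {
  set obj.grace = 1m;   # on trunk / later releases this is beresp.grace
}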

Then in the period between +10 seconds and 1m, I should see that
behavior -- getting stale data, but seeing Varnish go to my backend
app server to get new content?

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: "trunk" in grace docs?

2010-01-12 Thread John Norman
Trunk of the repo?

On Tue, Jan 12, 2010 at 3:26 PM, John Norman  wrote:
> At this page: http://varnish.projects.linpro.no/wiki/VCLExampleGrace
>
> What does the comment "# or for trunk" mean?
>
> And what is the difference between setting grace on obj and on beresp?
>
> (It would be helpful were both of these questions addressed in the doc.)
>
> John
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


"trunk" in grace docs?

2010-01-12 Thread John Norman
At this page: http://varnish.projects.linpro.no/wiki/VCLExampleGrace

What does the comment "# or for trunk" mean?

And what is the difference between setting grace on obj and on beresp?

(It would be helpful were both of these questions addressed in the doc.)

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Purging multiple requests

2010-01-11 Thread John Norman
Scenario:

-- We would prefer not to leverage checking a lot of paths.

-- Many pages are cached for GET's.

-- In vcl_recv, we want to remove cookies and check the cache:

if (req.request == "GET") {
 unset req.http.cookie;
 unset req.http.Authorization;
 lookup;
 }

BUT: Suppose the lookup results in a MISS:

Now we would like to "pass" but WITH the cookies. I.e., check the
cache without cookies; but if there's a miss, reattach them and make
the request.

--

Let me put this another way, describing what's happening in our code:

There are many routine server responses for which we have set caching
headers. All is beautiful.

But we have some, primarily of the form

/something/edit

where we would like to use the cookie to bring data into a form.

To be sure, we could check the file paths . . .

if (req.request == "GET" && req.url !~ "/edit$") {
 unset req.http.cookie;
 unset req.http.Authorization;
 lookup;
 }

but we were wondering if there is a pattern to save the cookies and
then reattach them later (in vcl_miss??), and thus get the "pass" to
the backend with the cookies back on the request.
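
One such pattern is sketched below -- essentially what the "Restoring
cookies on a miss" message elsewhere in this archive does; the OLD-Cookie
header name follows that message:

sub vcl_recv {
  if (req.request == "GET") {
    set req.http.OLD-Cookie = req.http.Cookie;    # stash the cookie before the lookup
    unset req.http.Cookie;
    unset req.http.Authorization;
    lookup;
  }
}

sub vcl_miss {
  if (req.http.OLD-Cookie) {
    set bereq.http.Cookie = req.http.OLD-Cookie;  # reattach it for the backend fetch
    unset req.http.OLD-Cookie;
  }
  fetch;
}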

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Re: Question from Dec. 2009: "still using cache when fetching content"

2010-01-08 Thread John Norman
I think Chris Davies has straightened me out: the scenario I describe
*is* covered by grace -- the first hit also gets stale content (as do
others in the grace period), which is exactly what I want.

On Fri, Jan 8, 2010 at 4:20 PM, John Norman  wrote:

> Hi. I've just subscribed, and have been reading through the e-mail archives
> for varnish-misc.
>
> In Dec. 2009, Jean-Christophe Petit asked (with subject "still using cache
> when fetching content"):
>
> -
> Is it possible to make Varnish sending the cache content at the same
> time it is fetching from the backend ?
> It will be more efficient for slow dynamic content ;)
> For example, I have a php page taking up to 5sec to run. If Varnish was
> able to get it while still sending the old cache page, it would be
> really great.
> No more unlucky visitor hitting it to update the cache..
> -
>
> There were a couple of replies, but I just wanted to flesh out the "use
> case":
>
> I think the scenario would be:
>
> (1) There is a page that takes a very long time to render (say, 90 seconds)
> (2) It is cached, perhaps to expire after an hour
> (3) In normal usage, everyone gets the cached page and is very happy
> (4) An administrator would like to refresh that page in the cache BEFORE
> the hour is up; but give any user the earlier cached page
> (6) The admin would like a mechanism: "Please refresh now with this
> request, but until the request is finished, serve whatever is in the cache."
>
> This is different from grace, 'cos no one (except the admin) is incurring
> the wait; and also the admin is asking for a refresh before the cached page
> has expired.
>
> Is this a feature that can be simulated in Varnish? Or a feature that might
> be added at some time?
>
> What we're concerned about is Google's timing of page download. If we set a
> cache period to, say, an hour, and Google incurs the "first hit" after cache
> expiry, then it (Google) has to wait for the finished page. In grace mode,
> requests after the Google hit would get the earlier cached page; but, still,
> Google has measured the page as taking, say, 60 seconds to download. In the
> model above, Google would get the cached page; it would just be stale.
>
> Thoughts?
>
> John
>
>
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc


Question from Dec. 2009: "still using cache when fetching content"

2010-01-08 Thread John Norman
Hi. I've just subscribed, and have been reading through the e-mail archives
for varnish-misc.

In Dec. 2009, Jean-Christophe Petit asked (with subject "still using cache
when fetching content"):

-
Is it possible to make Varnish sending the cache content at the same
time it is fetching from the backend ?
It will be more efficient for slow dynamic content ;)
For example, I have a php page taking up to 5sec to run. If Varnish was
able to get it while still sending the old cache page, it would be
really great.
No more unlucky visitor hitting it to update the cache..
-

There were a couple of replies, but I just wanted to flesh out the "use
case":

I think the scenario would be:

(1) There is a page that takes a very long time to render (say, 90 seconds)
(2) It is cached, perhaps to expire after an hour
(3) In normal usage, everyone gets the cached page and is very happy
(4) An administrator would like to refresh that page in the cache BEFORE the
hour is up; but give any user the earlier cached page
(6) The admin would like a mechanism: "Please refresh now with this request,
but until the request is finished, serve whatever is in the cache."

This is different from grace, 'cos no one (except the admin) is incurring
the wait; and also the admin is asking for a refresh before the cached page
has expired.

Is this a feature that can be simulated in Varnish? Or a feature that might
be added at some time?

What we're concerned about is Google's timing of page download. If we set a
cache period to, say, an hour, and Google incurs the "first hit" after cache
expiry, then it (Google) has to wait for the finished page. In grace mode,
requests after the Google hit would get the earlier cached page; but, still,
Google has measured the page as taking, say, 60 seconds to download. In the
model above, Google would get the cached page; it would just be stale.

Thoughts?

John
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc