For any cache, not just Varnish, the Accept-Encoding header used for the purge
(or a regular cache hit) must match the request header /exactly/. If you use
anything other than exactly "Accept-Encoding: gzip,deflate" your purge will
miss. So this is the expected behavior, AFAIK.
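The usual way around this, sketched below from the stock Varnish 2.x example configs, is to normalize Accept-Encoding in vcl_recv so that every client (and every purge request) collapses to one of a few canonical values:

```vcl
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # Canonicalize anything that accepts gzip
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm: drop the header entirely
            unset req.http.Accept-Encoding;
        }
    }
}
```

With this in place a purge only has to match one of two encodings instead of every client's exact header string.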
Any other head
If your default_ttl is not 0, then this may be the expected behavior. I'm not
sure if Varnish should really ever cache >=500 responses?
But in VCL you could do something like:
sub vcl_fetch {
    if ( obj.status >= 500 ) {
        set obj.ttl = 0s;
        set obj.cacheable = false;
    }
}
here's a way to tune this (swappiness already at 1).
Thanks,
--
Ken
On Jan 29, 2010, at 11:16 AM, Ken Brownfield wrote:
> On Jan 29, 2010, at 3:54 AM, Tollef Fog Heen wrote:
>> It should be. You'll lose the last storage silo (since that's not
>> closed yet), but old
On Jan 29, 2010, at 3:54 AM, Tollef Fog Heen wrote:
> It should be. You'll lose the last storage silo (since that's not
> closed yet), but older objects should be available.
This might be the source of the confusion. How often are silos closed? My
testing was simply "hit the cache for a single
Right, -spersistent. Child restarts are persistent, parent process stop/start
isn't.
Maybe there's a graceful, undocumented method of stopping the parent that I'm
not aware of?
--
kb
On Jan 27, 2010, at 1:26 AM, Tollef Fog Heen wrote:
> ]] Ken Brownfield
>
> | I
If you sometimes see the proper cookies being passed to the back-end, I would
think this would be a client problem? The VCL logic in question should either
always work or never work.
I just did a test of this, and I see the proper header hitting the back-end.
Maybe you're seeing unexpectedly
I'd love to test persistent under production load, but right now it's not
persistent. :-( (Storage doesn't persist through a parent restart)
--
Ken
On Jan 25, 2010, at 1:26 AM, Tollef Fog Heen wrote:
> ]] pablort
>
> | And how about 2.1 ? Any release date on the horizon ? :D
>
> Persistent
On Jan 18, 2010, at 4:03 PM, Michael S. Fischer wrote:
>> Does [Apache] perform "well" for static files in the absence of any other
>> function? Yes. Would I choose it for anything other than an application
>> server? No. There are much better solutions out there, and the proof is in
>> the
> Let me be clear, in case I have not been clear enough already:
>
> I am not talking about the edge cases of those low-concurrency, high-latency,
> scripted-language webservers that are becoming tied to web application
> frameworks like Rails and Django and that are the best fit for front-end
> c
On Jan 18, 2010, at 3:16 PM, Michael S. Fischer wrote:
> On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
>
>> In the real world, sites run their applications through web servers, and
>> this fact does (and should) guide the decision on the base web server to
>> use,
> I have a hard time believing that any difference in the total response time
> of a cached static object between Varnish and a general-purpose webserver
> will be statistically significant, especially considering typical Internet
> network latency. If there's any difference it should be well u
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
> On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne wrote:
>
> Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
> memory and the backends have a much easier life than before Varnish.
> We are about to upgrade RAM on the Varnish
On Jan 15, 2010, at 3:39 PM, pub crawler wrote:
> Have we considered adding pooling functionality to Varnish much like
> what they have in memcached?
>
> Run multiple Varnish(es) with load distributed amongst the identified
> Varnish server pool. So an element in Varnish gets hashed and the
>
Lots of good suggestions; I would look to LVS and/or haproxy for going on the
cheap; otherwise a NetScaler or F5 would do the trick.
With multiple caches, there are three ways I see to handle it:
1) Duplicate cached data on all Varnish instances.
This is a simple, stateless configuration, but i
Something like:
sub vcl_recv {
    if ( req.request == "GET" ) {
        set req.http.OLD-Cookie = req.http.Cookie;
        unset req.http.Cookie;
        set req.http.OLD-Authorization = req.http.Authorization;
        unset req.http.Authorization;
    }
}
Have the application emit a cache pragma or Expires header to make the BANNED page
non-cacheable. Alternatively, you could have the app emit an Expires header to
cause the browser to cache the result, but also add a header that would trigger
Varnish to /not/ cache it.
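A sketch of that second approach in Varnish 2.x VCL (the X-No-Cache header name is an assumption; use whatever your app actually emits):

```vcl
sub vcl_fetch {
    # The app sets Expires so the *browser* caches the page,
    # plus a flag header (hypothetical name) so Varnish doesn't.
    if (obj.http.X-No-Cache) {
        set obj.cacheable = false;
        set obj.ttl = 0s;
    }
}
```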
Looking at your previous posts,
As a workaround, on Linux (at least) you could emulate this with an iptables
SNAT rule, though it has lower performance potential.
--
Ken
On Jan 2, 2010, at 12:23 AM, Mike Schiessl wrote:
> From the shopping list:
> > Does this make sense ? Routing is based on the routes, and the outgoin
Those all seem very useful to me, but I think the lowest-hanging performance
fruit right now is simultaneous connections and the threading model (including
the discussions about stacksize and memory usage, etc).
Modeling Varnish's behavior with certain ranges of simultaneous worker and
backend
I believe you should upgrade to 2.0.5 (or scan Varnish ticket #529 for a patch)
which retains this and other headers in a 304 response.
--
Ken
On Nov 18, 2009, at 5:05 AM, Lars Jørgensen wrote:
> Hi,
>
> Another one that I'm trying to work out at the moment. I have enabled
> mod_expires in Ap
varnishd -f /path/to/your/config.vcl -C
This will compile your VCL into C and emit it to stdout. It will show
prototypes for all of the VRT interfaces accessible from VCL, the structs
representing your backend(s) and director(s), and the config itself. The wiki
is a little misleading (and -C i
Note that the linked article is from 2004. The kernels that RedHat uses are a
bag of hurt, not to mention ancient.
If you can upgrade to RHEL 5 that may be the easiest fix (I can only assume
that the mmap limitation has been removed). Perhaps RedHat has newer RHEL 4
kernels in a bleeding edge
Hopefully your upper management allows you to install contemporary
software and distributions. Otherwise memory leaks and x86_64 would
be the least of your concerns. Honestly, you're waiting for Varnish
to stabilize and you're running v1?
My data point: 5 months and over 100PB of transfer
48 PM, Henry Paulissen wrote:
> Our load balancer transforms all connections from keep-alive to close.
> So keep-alive connections aren’t the issue here.
>
> Also, if I limit the thread count I still see the same behavior.
>
> -Original message-
> From: Ken Brownfi
I've started playing with persistence a bit in trunk, and it seems
like the storage is persistent across restarts of the child, but /not/
the parent.
For a small working set, having any persistence at all is somewhat
optional. For large working sets, you really want persistence across
par
On Sep 16, 2009, at 10:03 AM, Kristian Lyngstol wrote:
> On Wed, Sep 16, 2009 at 09:54:25AM -0700, Ken Brownfield wrote:
>> I'm a bit loath to reenable this to get a full stacktrace and gdb
>> output, but if there's really nothing wrong with this I might
>> co
Ah, I stand corrected. But I was definitely having random crashes
when I enabled the vcl_fetch() section below:
sub vcl_recv {
    ...
    set req.http.Unmodified-Host = req.http.Host;
    set req.http.Unmodified-URL = req.url;
    ...
}
sub vcl_fetch {
    ...
    set o
My weapon of choice there would be oprofile, run something like this
under high load and/or when you have a lot of threads active:
opcontrol --init
# You'll want a debug kernel
# For example, the Ubuntu package is linux-image-debug-server
opcontrol --setup --vmlinux=/boot/vmlinux-2.6.24-server
o
The bottleneck you would typically see is interrupts from network
traffic (especially if you're tracking connections), bandwidth limits,
slow backends, too many keepalive sessions, and pthread stack size.
Some of those can exacerbate the thread count and memory usage on an
already stodgy p
Hey Karl. :-)
The implementation of purge in Varnish is really a queue of refcounted
ban objects. Every image hit is compared to the ban list to see if
the object in cache should be reloaded from a backend.
If you have purge_dups off, /every/ request to Varnish will regex
against every s
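For reference, the canonical 2.x purge hook looks roughly like this (the ACL name and address are assumptions; adapt to taste):

```vcl
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        # Queues a refcounted ban object; every later cache hit
        # is compared against it, as described above.
        purge("req.url ~ " req.url);
        error 200 "Purged.";
    }
}
```

Keep the ban expressions as specific as possible, since each one is evaluated against hits until it expires.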
I never found a way to see how much stack is /used/ vs. /allocated/ in
a process or thread, so it would be great if someone had ideas?
I could only experiment in production, first moving us to 1MB, then
256KB. I've yet to see any issues at 256KB, but we can reach the
upper limits of thread-
ne of these spikes, does it instantly
>> disappear? I've seen this happen (though only spiking to about
>> 12), and
>> this is when Varnish has munched through far more memory than we've
>> allocated it. This problem is one I've been looking into with Ken
>
Quite a coincidence... We've moved some traffic /off/ of Akamai, but
only today did we start stacking.
I don't expect there to be any problems *EXCEPT* in the 304 response
case -- currently Varnish strips the Expires and Cache-Control (among
other) headers from 304 Not Modified responses, wh
See the FAQ:
http://varnish.projects.linpro.no/wiki/FAQ#IhaveasitewithmanyhostnameshowdoIkeepthemfrommultiplyingthecache
If your backends need to see the original hostname, you can unrewrite
it in vcl_miss().
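A sketch of that un-rewrite, assuming example.com is the canonical hostname and X-Original-Host is a scratch header (both names are made up for illustration):

```vcl
sub vcl_recv {
    # Save the client's hostname, then normalize it so all
    # variants hash to a single cache entry.
    set req.http.X-Original-Host = req.http.Host;
    set req.http.Host = "example.com";
}

sub vcl_miss {
    # Restore the original hostname for the backend fetch only.
    set bereq.http.Host = req.http.X-Original-Host;
}
```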
--
Ken
On Jul 11, 2009, at 3:54 AM, Hip Hydra wrote:
> Hi, I'm running a network of
On Jul 14, 2009, at 3:05 AM, Kristian Lyngstol wrote:
> On Tue, Jul 14, 2009 at 11:46:58AM +0200, Lazy wrote:
>> the site is usually not so busy, but it has sometimes spikes of
>> static
>> traffic (about 50Mbps) that's why i upped the thread limit, 3000 was
>> to low
>
> I seriously doubt 3k was
es it, FWIW.
--
Ken.
On Jun 30, 2009, at 5:11 PM, Tollef Fog Heen wrote:
>
> ]] "Poul-Henning Kamp"
>
> | In message <5c056ae2-7207-42f8-9e4b-0f541dc4b...@slide.com>, Ken
> Brownfield wri
> | tes:
> |
> | >Would a stack overflow take out the whole ch
Isn't VRT_SetHdr() what you're looking for? Mind its semantics, though.
--
Ken.
On Jul 6, 2009, at 7:26 AM, Laurence Rowe wrote:
> Hi,
>
> Though my C is rather rusty by now, I'd like to make the mod_auth_tkt
> [1] signed cookie authentication / authorisation system work with
> Varnish. The id
On Jun 18, 2009, at 11:20 PM, Ken Brownfield wrote:
> [...]
> The attached patch creates a backend flag to change the initial
> health of backends upon varnishd startup:
>
> backend foo {
> .initial_health = 1;
> }
> [...]
> TODO: I should probably add "-p i
lib/jemalloc/malloc.c:
#define CHUNK_2POW_DEFAULT 20
Thanks!
--
Ken.
On Jun 19, 2009, at 7:15 AM, Tollef Fog Heen wrote:
> ]] Ken Brownfield
>
> | When looking at /proc/map info for varnish threads, I'm seeing the
> | following allocations in numbers that
When looking at /proc/map info for varnish threads, I'm seeing the
following allocations in numbers that essentially match the child count:
40111000 8192K rw---[ anon ]
And this at almost double the child count:
7f4d5790 1024K rw---[ anon ]
For example, for 64 work
[Apologies if this belongs on varnish-dev; this list seemed much more
active.]
This patch came about from observations in tickets #512 and #518.
The attached patch creates a backend flag to change the initial health
of backends upon varnishd startup:
backend foo {
    .initial_health = 1;
}