Survey; how do you use Varnish?
1) 8 (all servers serve from memory; big machines: 6 as a load-balanced frontend cluster, 1 between the frontend Varnish servers and the backend application server, and 1 as backup)
2) ~250 MB/s through Varnish; 2.5M visitors a day, 27M pageviews a day (for the whole website; some content doesn't go through Varnish)
3) Online adult entertainment (<-- porn)
4) Nope
5.1) WCCP support
5.2) My feature request: http://projects.linpro.no/pipermail/varnish-misc/2010-January/003636.html
5.3) Gzip support
5.4) VCL cookie handling
5.5) Synthetic content (I use error now; it doesn't bother me)

Henry

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Per Andreas Buer
Sent: Friday 29 January 2010 15:48
To: varnish-misc@projects.linpro.no
Subject: Survey; how do you use Varnish?

Hi list. I'm working for Redpill Linpro; you might have heard of us - we're the main sponsor of Varnish development. We're a bit curious about how Varnish is used, which features are used, and what is missing. What does a typical installation look like? The information you choose to reveal to me will be aggregated and then deleted, and I promise I won't use it for any sales activities or harass you in any way. We will publish the result on this list if the feedback is significant. If you have the time and would like to help us, please take some time and answer the questions in a direct mail to me. Thanks.

1) How many servers do you have running Varnish?
2) What sort of total load are you seeing? Mbit/s or hits per second are the preferred metrics.
3) What sort of site is it?
*) Online media
*) Corporate website (ibm.com or similar)
*) Retail
*) Educational
*) Social website
4) Do you use ESI?
5) What features are you missing from Varnish? Max three features, prioritized. Please refer to http://varnish-cache.org/wiki/PostTwoShoppingList for features.
--
Per Andreas Buer
Redpill Linpro Group - Changing the Game
Mobile +47 958 39 117 / Phone: +47 21 54 41 21
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
RE: Feature REQ: Match header value against acl
Nice! When will this be in trunk?

Regards,

@Paul, sorry... forgot to include varnish-misc

-----Original Message-----
From: p...@critter.freebsd.dk [mailto:p...@critter.freebsd.dk] On Behalf Of Poul-Henning Kamp
Sent: Tuesday 19 January 2010 18:24
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Feature REQ: Match header value against acl

In message <002501ca9918$aa519aa0$fef4cf...@paulissen@qbell.nl>, "Henry Paulissen" writes:

>What I tried to do is as follows:
>
>if ( !req.http.X-Forwarded-For ~ purge ) {

I have decided what the syntax for this will be, but I have still not implemented it. In general all type conversions, except to string, will be explicit and provide a default, so the above would become:

if (!IP(req.http.X-Forwarded-For, 127.0.0.2) ~ purge) {
...

If the X-F-F header is not there, or does not contain an IP#, 127.0.0.2 will be used instead.

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Feature REQ: Match header value against acl
I noticed it is impossible to match a header value against an acl. What I tried to do is as follows:

if ( !req.http.X-Forwarded-For ~ purge ) {
    remove req.http.Cache-Control;
}

This is to reduce the number of forced refreshes caused by bots. Normally you would use client.ip (which does work with ACLs), but I have a load balancer in front of Varnish, so all client IP addresses are in the X-Forwarded-For header. A quick and dirty fix for now is to use a regex, but that takes a lot of extra code (as I have to match against several IPs).

Current version: varnish-trunk SVN

Regards,
Henry
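Until header-against-acl matching lands, the regex workaround mentioned above can be sketched like this. A hypothetical VCL sketch: the IP list and header names are illustrative, not the poster's actual configuration.

```vcl
sub vcl_recv {
    # Hypothetical trusted-client list, written as a regex because a
    # string header cannot be matched against an acl yet.
    # Equivalent acl would contain: 127.0.0.1, 192.168.0.10.
    if (req.http.X-Forwarded-For !~ "(^|, )(127\.0\.0\.1|192\.168\.0\.10)$") {
        # Untrusted client: strip forced-refresh headers.
        remove req.http.Cache-Control;
        remove req.http.Pragma;
    }
}
```

The regex anchors on the last entry in X-Forwarded-For, which is the address added by the load balancer in front of Varnish; every additional trusted address makes the alternation longer, which is exactly the extra code the mail complains about.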
RE: feature request cache refresh
As far as I know, Varnish does this by default? To expire content you have to serve proper Expires and Last-Modified headers. Some (dynamic) applications set improper headers, or none of those headers at all.

===
@Martin Boer (DTCH): Please get in touch with me by email. I have built up quite a bit of experience with Varnish and may be able to help you.
===

Regards,
Henry

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Rob S
Sent: Tuesday 19 January 2010 9:23
To: Martin Boer
Cc: Varnish misc
Subject: Re: feature request cache refresh

Martin Boer wrote:
> I would like to see the following feature in varnish;
> during the grace period varnish will serve requests from the cache but
> simultaneously does a backend request and stores the new object.
>
This would also be of interest to us. I'm not sure if it's best to have a parameter to vary the behaviour of 'grace', or to have an additional parameter for "max age of stale content to serve".

> If anyone has a workable workaround to achieve the same results I'm very
> interested.
>
Anyone?

Rob
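The stale-while-revalidate behaviour requested above was not built in at the time; plain grace only serves stale objects while a backend fetch is under way. A minimal grace setup in the VCL syntax of that era looks roughly like this (the 2-minute timings are illustrative assumptions, not values from the thread):

```vcl
sub vcl_recv {
    # This client is willing to accept objects up to 2 minutes
    # past their TTL.
    set req.grace = 2m;
}

sub vcl_fetch {
    # Keep expired objects around for 2 minutes so they can still
    # be served while a fresh copy is being fetched.
    set obj.grace = 2m;
}
```

With both set, a request that hits an expired object whose refresh is already in flight gets the stale copy immediately instead of queueing behind the backend fetch.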
RE: Slow connections
True and false. When you don't tell it to close the connection, it will do keep-alives. The problem with this is that only the first request header of the stream will be checked against the ACLs. If you use haproxy only to load balance between HTTP servers and are not doing routing based on URLs (e.g. sending \.(gif|jpg|png|jpeg) to a static server and everything else to a processing cluster), you may use keep-alives.

Henry

From: Michael Fischer [mailto:mich...@dynamine.net]
Sent: Wednesday 23 December 2009 1:12
To: Henry Paulissen
CC: Joe Williams; varnish-misc@projects.linpro.no
Subject: Re: Slow connections

haproxy has never supported keep-alive HTTP connections, to my knowledge.

--Michael

On Tue, Dec 22, 2009 at 3:41 PM, Henry Paulissen wrote:

Next one. Did you tune the TCP FIN timeout? (on both servers) By default, Linux holds connections open until they hit the FIN timeout (tcp_fin and tcp_fin2). We decreased it to 3.

HAProxy support: did you force an HTTP connection close in haproxy? If all connections are in keep-alive, your queue will fill up really quickly.

Henry

-----Original Message-----
From: Joe Williams [mailto:j...@joetify.com]
Sent: Wednesday 23 December 2009 0:23
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Slow connections

Thanks Henry, nf_conntrack_max is set high on both machines. I've had the full table issue before :P

On 12/22/09 2:58 PM, Henry Paulissen wrote:
> Have a look at the conntrack settings in the kernel (sysctl) on both sides.
> It could be that your conntrack table is full (conntrack only exists if you use
> iptables with netfilter_conntrack).
>
> Regards,
> Henry
>
> -----Original Message-----
> From: varnish-misc-boun...@projects.linpro.no
> [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Joe Williams
> Sent: Tuesday 22 December 2009 18:12
> To: varnish-misc@projects.linpro.no
> Subject: Slow connections
>
> I am seeing a good number (1/100) of connections to varnish (from
> haproxy) taking 3 seconds.
My first thought was the connection backlog
> but somaxconn and listen_depth are both set higher than the number of
> connections. Anyone have any suggestions on how to track down what is
> causing this, or settings I can use to try to alleviate it?
>
> Thanks.
>
> -Joe
>
> --
> Name: Joseph A. Williams
> Email: j...@joetify.com
> Blog: http://www.joeandmotorboat.com/
RE: Slow connections
Next one. Did you tune the TCP FIN timeout? (on both servers) By default, Linux holds connections open until they hit the FIN timeout (tcp_fin and tcp_fin2). We decreased it to 3.

HAProxy support: did you force an HTTP connection close in haproxy? If all connections are in keep-alive, your queue will fill up really quickly.

Henry

-----Original Message-----
From: Joe Williams [mailto:j...@joetify.com]
Sent: Wednesday 23 December 2009 0:23
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Slow connections

Thanks Henry, nf_conntrack_max is set high on both machines. I've had the full table issue before :P

On 12/22/09 2:58 PM, Henry Paulissen wrote:
> Have a look at the conntrack settings in the kernel (sysctl) on both sides.
> It could be that your conntrack table is full (conntrack only exists if you use
> iptables with netfilter_conntrack).
>
> Regards,
> Henry
>
> -----Original Message-----
> From: varnish-misc-boun...@projects.linpro.no
> [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Joe Williams
> Sent: Tuesday 22 December 2009 18:12
> To: varnish-misc@projects.linpro.no
> Subject: Slow connections
>
> I am seeing a good number (1/100) of connections to varnish (from
> haproxy) taking 3 seconds. My first thought was the connection backlog
> but somaxconn and listen_depth are both set higher than the number of
> connections. Anyone have any suggestions on how to track down what is
> causing this, or settings I can use to try to alleviate it?
>
> Thanks.
>
> -Joe
>
> --
> Name: Joseph A. Williams
> Email: j...@joetify.com
> Blog: http://www.joeandmotorboat.com/
RE: Slow connections
Have a look at the conntrack settings in the kernel (sysctl) on both sides. It could be that your conntrack table is full (conntrack only exists if you use iptables with netfilter_conntrack).

Regards,
Henry

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Joe Williams
Sent: Tuesday 22 December 2009 18:12
To: varnish-misc@projects.linpro.no
Subject: Slow connections

I am seeing a good number (1/100) of connections to varnish (from haproxy) taking 3 seconds. My first thought was the connection backlog but somaxconn and listen_depth are both set higher than the number of connections. Anyone have any suggestions on how to track down what is causing this, or settings I can use to try to alleviate it?

Thanks.

-Joe

--
Name: Joseph A. Williams
Email: j...@joetify.com
Blog: http://www.joeandmotorboat.com/
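The two kernel knobs discussed in this thread (the FIN timeout and the conntrack table size) are set via sysctl. The values below are illustrative assumptions, not recommendations from the thread (the poster only mentions lowering the FIN timeout to 3), and the exact conntrack key name varies by kernel version:

```
# /etc/sysctl.conf -- illustrative values, apply with `sysctl -p`
net.ipv4.tcp_fin_timeout = 3          # reclaim FIN-WAIT-2 sockets quickly
net.netfilter.nf_conntrack_max = 262144   # headroom in the conntrack table
```

When the conntrack table fills up, new connections are silently dropped, which shows up as exactly the kind of intermittent multi-second stalls described above.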
req.hash and fetch
Hey all,

On one of my websites, the request URL alone is not unique: the content depends on the URL you request (.com|.co.uk|etc) and on the browser language. So on the first request we set a cookie with the language preference (so we don't have to detect it every time). In Varnish I did the following:

@vcl_recv:
set req.http.X-match = regsub(req.http.Cookie, "^.*(langpref=[a-z]+_[a-z]+).*$", "\1");

@vcl_hash:
set req.hash += req.url;
set req.hash += req.http.X-match;

Does Varnish need more settings (in vcl_fetch?) to store the parsed backend response under the proper hash key?

Regards.
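For reference, no extra vcl_fetch logic is needed: a fetched object is stored under whatever key vcl_hash produced for the request. A sketch of the full approach from the mail (the added host component is an assumption for multi-domain setups, not from the original):

```vcl
sub vcl_recv {
    # Extract only the language cookie so other cookies
    # don't fragment the cache.
    set req.http.X-match = regsub(req.http.Cookie,
        "^.*(langpref=[a-z]+_[a-z]+).*$", "\1");
}

sub vcl_hash {
    # Hash on URL + host + language preference. The backend
    # response is stored under this key automatically; nothing
    # extra is required in vcl_fetch.
    set req.hash += req.url;
    set req.hash += req.http.host;
    set req.hash += req.http.X-match;
    hash;
}
```

One caveat: if the Cookie header contains no langpref at all, regsub leaves the whole cookie string in X-match, so a fallback (e.g. clearing X-match when the cookie doesn't match) may be worth adding.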
RE: Varnish virtual memory usage
People aren't great sysadmins in one day. Tell us more about your system (specs, Linux distro, VCL config, startup command, Linux (sysctl?) tuning). Maybe it can help anybody/me.

Regards.

From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Ken Brownfield
Sent: Thursday 5 November 2009 22:35
To: cripy
CC: varnish-misc@projects.linpro.no
Subject: Re: Varnish virtual memory usage

Hopefully your upper management allows you to install contemporary software and distributions. Otherwise memory leaks and x86_64 would be the least of your concerns. Honestly, you're waiting for Varnish to stabilize and you're running v1? My data point: 5 months and over 100PB of transfers, and 2.0.4 is stable and has never leaked in our pure x86_64 production environment. Its memory use can be precisely monitored and controlled between Varnish configuration and the OS environment by any competent sysadmin, IMHO. We actually can't use Squid at all because it really does leak like a sieve. pmap does not lie. I just hope that people that have problems with any software are taking on the responsibility of diagnosing their own environments as much as they expect any OSS project to diagnose its code -- the former is just as often the problem as the latter.
--
Ken

On Nov 5, 2009, at 12:22 PM, cripy wrote:

I experienced this same issue under x64. Varnish seemed great but once I put some real traffic on it under x64 the memory leaks began and it would eventually crash/restart. Ended up putting Varnish on the back burner and have been waiting for it to stabilize before even trying to present it to upper management again. Varnish has great potential but until it can run stable under x64 it's got a long fight ahead of itself. (I do want to note that my comments are based mainly on varnish 1 and not varnish 2.0)

--cripy
RE: Varnish virtual memory usage
Our load balancer transforms all connections from keep-alive to close, so keep-alive connections aren't the issue here. Also, if I limit the thread count I still see the same behavior.

-----Original Message-----
From: Ken Brownfield [mailto:k...@slide.com]
Sent: Thursday 5 November 2009 0:31
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Varnish virtual memory usage

Looks like varnish is allocating ~1.5GB of RAM for pure cache (which may roughly match your "-s file" option) but 1,610 threads with your 1MB stack limit will use 1.7GB of RAM. Pmap is reporting the footprint of this instance as roughly 3.6GB, and I'm assuming top/ps agree with that number. Unless your "-s file" option is significantly less than 1-1.5GB, the sheer thread count explains your memory usage: maybe using a stacksize of 512K or 256K could help, and/or disable keepalives on the client side? Also, if you happen to be using a load balancer, TCP Buffering (NetScaler) or Proxy Buffering (BigIP) or the like can drastically reduce the thread count (and they can handle the persistent keepalives as well). But IMHO, an event-based (for example) handler for "idle" or "slow" threads is probably the next important feature, just below persistence. Without something like TCP buffering, the memory available for actual caching is dwarfed by the thread stacksize alloc overhead.

Ken

On Nov 4, 2009, at 3:18 PM, Henry Paulissen wrote:

> I attached the memory dump.
>
> Child process count gives me 1610 processes (on this instance).
> Currently the server isn't so busy (~175 requests/sec).
>
> Varnishstat -1:
> ============================================================
> uptime                 3090          .
Child uptime
> client_conn          435325       140.88  Client connections accepted
> client_drop               0         0.00  Connection dropped, no sess
> client_req           435294       140.87  Client requests received
> cache_hit             45740        14.80  Cache hits
> cache_hitpass             0         0.00  Cache hits for pass
> cache_miss           126445        40.92  Cache misses
> backend_conn         355277       114.98  Backend conn. success
> backend_unhealthy         0         0.00  Backend conn. not attempted
> backend_busy              0         0.00  Backend conn. too many
> backend_fail              0         0.00  Backend conn. failures
> backend_reuse         34331        11.11  Backend conn. reuses
> backend_toolate         690         0.22  Backend conn. was closed
> backend_recycle       35021        11.33  Backend conn. recycles
> backend_unused            0         0.00  Backend conn. unused
> fetch_head                0         0.00  Fetch head
> fetch_length         384525       124.44  Fetch with Length
> fetch_chunked          2441         0.79  Fetch chunked
> fetch_eof                 0         0.00  Fetch EOF
> fetch_bad                 0         0.00  Fetch had bad headers
> fetch_close            2028         0.66  Fetch wanted close
> fetch_oldhttp             0         0.00  Fetch pre HTTP/1.1 closed
> fetch_zero                0         0.00  Fetch zero len
> fetch_failed              0         0.00  Fetch failed
> n_sess_mem              989          .    N struct sess_mem
> n_sess                   94          .    N struct sess
> n_object              89296          .    N struct object
> n_vampireobject           0          .    N unresurrected objects
> n_objectcore          89640          .    N struct objectcore
> n_objecthead          25379          .    N struct objecthead
> n_smf                     0          .    N struct smf
> n_smf_frag                0          .    N small free smf
> n_smf_large               0          .    N large free smf
> n_vbe_conn               26          .    N struct vbe_conn
> n_wrk                  1600          .    N worker threads
> n_wrk_create           1600         0.52  N worker threads created
> n_wrk_failed              0         0.00  N worker threads not created
> n_wrk_max              1274         0.41  N worker threads limited
> n_wrk_queue               0         0.00  N queued work requests
> n_wrk_overflow         1342         0.43  N overflowed work requests
> n_wrk_drop                0         0.00  N dropped work requests
> n_backend                 5          .    N backends
> n_expired              1393          .    N expired objects
> n_lru_nuked           35678          .    N LRU nuked objects
RE: Back from the dea^H^H^Hsoul-less
Google Translate is very nice in this case :) As a Dutchman, my Danish isn't that great.

Regards

-----Original Message-----
From: p...@critter.freebsd.dk [mailto:p...@critter.freebsd.dk] On Behalf Of Poul-Henning Kamp
Sent: Thursday 5 November 2009 0:08
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: Back from the dea^H^H^Hsoul-less

In message <002c01ca5da3$77a93230$66fb96...@paulissen@qbell.nl>, "Henry Paulissen" writes:

>Windows-refund case?¿?
>Did I miss something?

http://phk.freebsd.dk/MicrosoftSkat/

You should be able to read it :-)

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
RE: Back from the dea^H^H^Hsoul-less
Windows-refund case?¿? Did I miss something?

Anyway, good luck with your case.

Regards.

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Poul-Henning Kamp
Sent: Thursday 5 November 2009 0:00
To: varnish-misc@projects.linpro.no
Subject: Back from the dea^H^H^Hsoul-less

Hi Guys,

I owe you all an apology for disappearing for the last couple of weeks, but I had to spend pretty much all my time writing my reply in my Windows-refund case against Lenovo. Tomorrow I'll drop off the result at the court-house, and then I should be able to ignore it until X-mas, when Lenovo is supposed to reply. And then it's back to hacking varnish...

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
RE: Varnish virtual memory usage
No, varnishd still uses way more than allowed. The only solutions I have found so far are: run on x64 Linux and restart Varnish every 4 hours (crontab), or run on x32 Linux (everything works as expected, but you can't allocate more than 4G per instance). I hope Linpro will find and address this issue.

Again @ Linpro: if you need a machine (with live traffic) to run some tests, please contact me. We have multiple machines in high availability, so testing and rebooting an instance wouldn't hurt us.

Regards.

-----Original Message-----
From: Rogério Schneider [mailto:stoc...@gmail.com]
Sent: Wednesday 4 November 2009 22:04
To: Henry Paulissen
CC: Scott Wilson; varnish-misc@projects.linpro.no
Subject: Re: Varnish virtual memory usage

On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen wrote:
> I will report back.

Did this solve the problem? Removing this?

>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") {
>>     purge_url(req.url);
>> }

Cheers

Att,
--
Rogério Schneider
MSN: stoc...@hotmail.com
GTalk: stoc...@gmail.com
Skype: stockrt
http://stockrt.github.com
Varnish virtual memory usage
That didn't do the trick, though.

uptime              13662          .    Child uptime
client_conn        959755        70.25  Client connections accepted
client_drop             0         0.00  Connection dropped, no sess
client_req         959747        70.25  Client requests received
cache_hit          470881        34.47  Cache hits
cache_hitpass           0         0.00  Cache hits for pass
cache_miss          91401         6.69  Cache misses
backend_conn       402367        29.45  Backend conn. success
backend_unhealthy       0         0.00  Backend conn. not attempted
backend_busy            0         0.00  Backend conn. too many
backend_fail           15         0.00  Backend conn. failures
backend_reuse       87098         6.38  Backend conn. reuses
backend_toolate      7263         0.53  Backend conn. was closed
backend_recycle     94363         6.91  Backend conn. recycles
backend_unused          0         0.00  Backend conn. unused
fetch_head              0         0.00  Fetch head
fetch_length       475447        34.80  Fetch with Length
fetch_chunked        5867         0.43  Fetch chunked
fetch_eof               0         0.00  Fetch EOF
fetch_bad               0         0.00  Fetch had bad headers
fetch_close             0         0.00  Fetch wanted close
fetch_oldhttp           0         0.00  Fetch pre HTTP/1.1 closed
fetch_zero              0         0.00  Fetch zero len
fetch_failed            1         0.00  Fetch failed
n_sess_mem            495          .    N struct sess_mem
n_sess                 46          .    N struct sess
n_object            41675          .    N struct object
n_vampireobject         0          .    N unresurrected objects
n_objectcore        41902          .    N struct objectcore
n_objecthead        35695          .    N struct objecthead
n_smf                   0          .    N struct smf
n_smf_frag              0          .    N small free smf
n_smf_large             0          .    N large free smf
n_vbe_conn              8          .    N struct vbe_conn
n_wrk                1600          .    N worker threads
n_wrk_create         1600         0.12  N worker threads created
n_wrk_failed            0         0.00  N worker threads not created
n_wrk_max               0         0.00  N worker threads limited
n_wrk_queue             0         0.00  N queued work requests
n_wrk_overflow        809         0.06  N overflowed work requests
n_wrk_drop              0         0.00  N dropped work requests
n_backend               5          .    N backends
n_expired           49304          .    N expired objects
n_lru_nuked             0          .    N LRU nuked objects
n_lru_saved             0          .    N LRU saved objects
n_lru_moved        198381          .    N LRU moved objects
n_deathrow              0          .
N objects on deathrow
losthdr                21         0.00  HTTP header overflows
n_objsendfile           0         0.00  Objects sent with sendfile
n_objwrite         954443        69.86  Objects sent with write
n_objoverflow           0         0.00  Objects overflowing workspace
s_sess             959749        70.25  Total Sessions
s_req              959747        70.25  Total Requests
s_pipe                  0         0.00  Total pipe
s_pass             398061        29.14  Total pass
s_fetch            481313        35.23  Total fetch
s_hdrbytes      327272320     23954.93  Total header bytes
s_bodybytes    1551538833    113566.01  Total body bytes
sess_closed        959740        70.25  Session Closed
sess_pipeline           0         0.00  Session Pipeline
sess_readahead          0         0.00  Session Read Ahead
sess_linger             0         0.00  Session Linger
sess_herd               9         0.00  Session herd
shm_records      64046389      4687.92  SHM records
shm_writes        4351501       318.51  SHM writes
shm_flushes             0         0.00  SHM flushes due to overflow
shm_cont             4212         0.31  SHM MTX contention
shm_cycles             26         0.00  SHM cycles through buffer
sm_nreq                 0         0.00  allocator requests
sm_nobj                 0          .    outstanding allocations
sm_balloc               0          .    bytes allocated
sm_bfree                0          .    bytes free
sma_nreq           572071        41.87  SMA allocator requests
sma_nobj            83345          .    SMA outstanding allocations
sma_nbytes      499243475          .    SMA outstanding bytes
sma_balloc     1781391844          .    SMA bytes allocated
sma_bfree      1282148369
RE: Varnish virtual memory usage
Awh, thank you for your comment. I'll make a test case of it tomorrow (or else after the weekend). I will report back.

-----Original Message-----
From: Scott Wilson [mailto:sc...@idealist.org]
Sent: Thursday 22 October 2009 8:52
To: Henry Paulissen
Cc: varnish-misc@projects.linpro.no; k...@slide.com
Subject: Re: Varnish virtual memory usage

We had a similar problem where varnish would fill all swap and crash every couple of weeks. The trick that seems to have solved the problem was to remove purge.url from our VCL (a lot of badly behaved clients send a lot more no-cache headers than necessary). We replaced purge.url with an approach that sets the object's ttl to zero and restarts the request. The details are here:

http://varnish.projects.linpro.no/wiki/VCLExampleEnableForceRefresh

In our case we're using FreeBSD 7.2 64-bit. All that said, it doesn't seem that this solution jives with Roi's random url test unless purge.url figured in his vcl / testing script.

cheers,
scott

2009/10/22 Henry Paulissen :
> We ran CentOS 5.3 x64 when we noticed this strange behavior. Later on we
> moved to Fedora Core 11 x64, but we were still noticing the same memory
> allocation problems. Later on we reinstalled the server with VMware to run a
> couple of (half live, a.k.a. beta) tests and noticed it isn't happening under
> Fedora Core 11 x32.
>
> We do about 3000 connections/sec for static content (smaller images). For
> large images (> 200kb), javascript and css we have other instances running
> (all having the same issues, but I'm going to tell you about the static
> content instance).
>
> Hit rate is close to 100% (99-100%).
> Server cores: 16
> Memory: 24GB (the VM host server is upgraded to 64GB RAM and is only running varnish
> guests on malloc, so I doubt there's a real performance impact)
>
> Tried changing the number of thread_pools and workers; nothing helped.
> Did the sysctl recommended settings. Disabled the conntrack filter in iptables.
> All incoming requests come with the "Connection: close" header (we have
> a high-availability server above it, which doesn't allow keep-alive
> connections, so it transforms every connection to close).
>
> Both storage types were used.
>
> I did notice something when I changed the lru_interval to 60: the reserved
> memory stayed within its limits (before changing this setting it grew way
> above the max limit). But virtual memory is still way above the memory limit.
>
> If we didn't restart varnish every few hours, it grew above the physical
> memory limit and started using swap space. If the varnish server was
> restarted, it freed up the memory.
>
> Tried both stable and svn versions.
>
> My VCL for static:
>
> director staticbackend round-robin {
>    {
>        .backend = {
>            .host = "192.168.x.x";
>            .port = "x";
>            .connect_timeout = 2s;
>            .first_byte_timeout = 5s;
>            .between_bytes_timeout = 2s;
>        }
>    }
>    {
>        .backend = {
>            .host = "192.168.x.x";
>            .port = "x";
>            .connect_timeout = 2s;
>            .first_byte_timeout = 5s;
>            .between_bytes_timeout = 2s;
>        }
>    }
> }
>
> sub vcl_recv {
>    set req.backend = staticbackend;
>
>    if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" &&
>        req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" &&
>        req.request != "DELETE") {
>        /*
>        Non-RFC2616 or CONNECT which is weird.
>        Shoot this client, but first go in pipeline to the webserver.
>        Maybe he knows what to do with this request.
>        */
>        return (pipe);
>    }
>
>    remove req.http.X-Forwarded-For;
>    remove req.http.Accept-Encoding;
>    remove req.http.Accept-Charset;
>    remove req.http.Accept-Language;
>    remove req.http.Referer;
>    remove req.http.Accept;
>    remove req.http.Cookie;
>
>    return (lookup);
> }
>
> sub vcl_pipe {
>    set
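The force-refresh replacement for purge_url that Scott describes (set the object's TTL to zero and restart the request) can be sketched roughly as follows. This is a hypothetical sketch: the acl name and header check are illustrative assumptions; see the linked wiki page for the actual recipe.

```vcl
acl trusted_refreshers {
    # Illustrative: only these clients may force a refresh.
    "127.0.0.1";
}

sub vcl_recv {
    if (req.http.Cache-Control ~ "no-cache" && client.ip ~ trusted_refreshers) {
        # Mark the request so vcl_hit knows to expire the object.
        set req.http.X-Force-Refresh = "1";
    }
}

sub vcl_hit {
    if (req.http.X-Force-Refresh) {
        # Expire just this object and retry the request: the restart
        # misses the cache and fetches a fresh copy from the backend.
        set obj.ttl = 0s;
        restart;
    }
}
```

Compared with purge_url, this touches a single object instead of scanning for matching URLs, which is why it avoids the overhead that badly behaved no-cache clients were triggering.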
RE: Varnish virtual memory usage
"} obj.status " " obj.response {" Error "} obj.status " " obj.response {" "} obj.response {" Guru Meditation: XID: "} req.xid {" "}; return (deliver); } # ##### For further details see my ticket: http://varnish.projects.linpro.no/ticket/546 @Kristian: When the programmers / engineers have some spare time over, they are always welcome to see it in live action. -Oorspronkelijk bericht- Van: Ken Brownfield [mailto:k...@slide.com] Verzonden: woensdag 21 oktober 2009 21:57 Aan: Henry Paulissen CC: varn...@projects.linpro.no Onderwerp: Re: Varnish virtual memory usage Small comments: 1) We're running Linux x86_64 exclusively here under significant load, with no memory issues. 2) Why don't you compile a 32-bit version of Varnish; wouldn't this have the same effect without the RAM and performance hit of VMs? 3) Do you make heavy use of purges? -- kb On Oct 21, 2009, at 6:22 AM, Henry Paulissen wrote: > We encounter the same problem. > > Its seems to occur only on x64 platforms. > We decided to take a different approach and installed vmware to the > machine. > Next we did a setup of 6 guests with x32 PAE software. > > No strange memory leaks occurred since then at the price of small > storage (3.5G max) and limited worker threads (256 max). > > Opened a ticket for the problem, but the wont listen until I buy a > support contract (á €8K). > Seems they don’t want to know there is some kind of memory issue in > their software. > > Anyway... > Varnish is running stable now with some few tricks. > > > Regards, > > -Original Message- > From: varnish-misc-boun...@projects.linpro.no [mailto:varnish-misc- > boun...@projects.linpro.no] On Behalf Of Kristian Lyngstol > Sent: woensdag 21 oktober 2009 13:34 > To: Roi Avinoam > Cc: varnish-misc@projects.linpro.no > Subject: Re: Varnish virtual memory usage > > On Mon, Sep 21, 2009 at 02:55:07PM +0300, Roi Avinoam wrote: >> At Metacafe we're testing the integration with Varnish, and I was >> tasked with benchmarking our Varnish setup. 
I intentionally
>> over-flooded the server with requests, in an attempt to see how the
>> system will behave under extensive traffic. Surprisingly, the server
>> ran out of swap and crashed.
>
> That seems mighty strange. What sort of tests did you do?
>
>> In our configuration, "-s file,/var/lib/varnish/varnish_storage.bin,1G".
>> Does it mean Varnish shouldn't use more than 1GB of the virtual memory?
>> Is there any other way to limit the memory/storage usage?
>
> If you are using -s file and you have 4GB of memory, you are telling
> Varnish to create a _file_ of 1GB, and it's up to the kernel what it
> keeps in memory or not. If you actually run out of memory with this
> setup, you've either hit a bug (need more details first), or you're
> doing something strange like having the mmaped file (/var/lib/varnish/)
> in tmpfs with a size limit less than 1GB or something along those lines.
> But I need more details to say anything for certain.
>
> --
> Kristian Lyngstøl
> Redpill Linpro AS
> Tlf: +47 21544179
> Mob: +47 99014497
RE: Varnish virtual memory usage
Maybe it was a bit rough to say, indeed... My apologies for that one. It's what I've been saying for a longer time now (as I also said to Paul by phone). I'm simply not making enough profit for €8k, but I'm not saying I don't want to pay anything for service/support. It's simply not within my reach, and for me it's cheaper to run 6 guests on a big VMware server than to pay €8K and (maybe) get the problem solved.

Anyway, this is way too off-topic for this thread. I shared my experiences and solutions; Roi can consider them as a solution for him.

Regards,
Henry Paulissen

-----Original Message-----
From: 'Kristian Lyngstol' [mailto:krist...@redpill-linpro.com]
Sent: Wednesday 21 October 2009 15:43
To: Henry Paulissen
Cc: varn...@redpill-linpro.com
Subject: Re: Varnish virtual memory usage (ticket #546)

On Wed, Oct 21, 2009 at 02:57:34PM +0200, Henry Paulissen wrote:
> Opened a ticket for the problem, but they won't listen until I buy a
> support contract (á €8K). Seems they don't want to know there is some
> kind of memory issue in their software.

The ticket is not closed; we have, however, not been able to reproduce this, as we point out in the ticket. Until we can either reproduce this ourselves or get more data (more reports of the same issue, for instance), there really isn't much we can do. For a service agreement customer, we would most likely use their system to reproduce the issue and take it from there. You will understand if we do not log into your system for free to solve a problem which so far has been reported by two people. We do take the issue seriously, but memory leaks that only occur on a specific setup that we do not have access to are nearly impossible to track down. We could read through and verify our code for a year and still not find the bug. Service agreements help sponsor the development of Varnish; in return you get priority on bugs - even the ones that are difficult to track down.
We do not require anyone to pay for our service agreements to use
Varnish or report bugs, and we do not ignore bug reports from non-paying
Varnish users. As you may notice, we (Tollef, Poul-Henning and myself)
offer a great deal of support for free on the mailing lists and on IRC,
so I think it's a bit unfair to state that we do not care unless you pay
for a service agreement, even if we weren't able to help in your
specific case.

The offer of a service agreement in the ticket was not meant to be an
entry fee to the bug tracker, but rather a means to make us prioritize a
complicated bug that we would otherwise have to put on hold. I'm sorry
if that didn't come across clearly.

--
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497
RE: Varnish virtual memory usage
We encountered the same problem. It seems to occur only on x64
platforms.

We decided to take a different approach and installed VMware on the
machine. Next we set up 6 guests running 32-bit PAE software. No strange
memory leaks have occurred since then, at the price of small storage
(3.5G max) and limited worker threads (256 max).

I opened a ticket for the problem, but they won't listen until I buy a
support contract (at €8K). It seems they don't want to know there is
some kind of memory issue in their software.

Anyway... Varnish is running stable now with a few tricks.

Regards,

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no
[mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of Kristian
Lyngstol
Sent: Wednesday 21 October 2009 13:34
To: Roi Avinoam
Cc: varnish-misc@projects.linpro.no
Subject: Re: Varnish virtual memory usage

On Mon, Sep 21, 2009 at 02:55:07PM +0300, Roi Avinoam wrote:
> At Metacafe we're testing the integration with Varnish, and I was
> tasked with benchmarking our Varnish setup. I intentionally
> over-flooded the server with requests, in an attempt to see how the
> system will behave under extensive traffic. Surprisingly, the server
> ran out of swap and crashed.

That seems mighty strange. What sort of tests did you do?

> In our configuration, "-s file,/var/lib/varnish/varnish_storage.bin,1G".
> Does it mean Varnish shouldn't use more than 1GB of the virtual memory?
> Is there any other way to limit the memory/storage usage?

If you are using -s file and you have 4GB of memory, you are telling
Varnish to create a _file_ of 1GB, and it's up to the kernel what it
keeps in memory or not. If you actually run out of memory with this
setup, you've either hit a bug (need more details first), or you're
doing something strange like having the mmaped file (/var/lib/varnish/)
in tmpfs with a size limit less than 1GB or something along those lines.
But I need more details to say anything for certain.
--
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497
RE: SMA outstanding allocations
Is this bad when the number is high and increasing every second?

-----Original Message-----
From: p...@critter.freebsd.dk [mailto:p...@critter.freebsd.dk] On Behalf
Of Poul-Henning Kamp
Sent: Thursday 1 October 2009 14:56
To: Henry Paulissen
CC: varnish-misc@projects.linpro.no
Subject: Re: SMA outstanding allocations

In message <003201ca4269$297408b0$7c5c1a...@paulissen@qbell.nl>, "Henry
Paulissen" writes:

>I would like to have some info about SMA outstanding allocations.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
p...@freebsd.org        | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by
incompetence.
SMA outstanding allocations
I would like to have some info about SMA outstanding allocations.

What is the meaning of it? What does it mean if that number is high /
increasing by the second without decreasing over time? What are the
configuration options regarding this item?

My guess is that it counts how many objects there are in a temporary
table (between fetch and the LRU), waiting to be written to the LRU. If
this is true: what does it mean? Is it that my LRU is locked most of the
time and therefore can't be written to? Does the maximum storage option
(-s malloc,5G) also affect this storage, or isn't this storage checked
for size? Do the cleanup processes (duplicate content check, removal of
expired content, etc.) also check this list?

Regards,
Henry Paulissen
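For readers landing here from the archive: "outstanding allocations" in a storage backend's statistics is generally a gauge of bytes allocated minus bytes freed, i.e. how much of the storage is currently occupied by live objects. The class below is a toy Python model of that bookkeeping (illustrative only, not Varnish code; names and the 5G-style limit are assumptions):

```python
class StorageStats:
    """Toy model of a storage backend's "outstanding allocations" gauge.

    outstanding = bytes allocated minus bytes freed. The number grows
    while objects enter the cache faster than they expire or are
    evicted, and shrinks again as objects are freed.
    """

    def __init__(self, limit):
        self.limit = limit        # cap, in the spirit of -s malloc,5G
        self.outstanding = 0      # currently allocated, not yet freed

    def alloc(self, nbytes):
        if self.outstanding + nbytes > self.limit:
            # A real cache would evict via LRU here instead of failing.
            raise MemoryError("storage full")
        self.outstanding += nbytes

    def free(self, nbytes):
        self.outstanding -= nbytes


stats = StorageStats(limit=100)
stats.alloc(60)
stats.alloc(30)
assert stats.outstanding == 90    # high and rising: cache filling up
stats.free(30)
assert stats.outstanding == 60    # objects expired/evicted: it drops
```

Under this reading, a steadily rising gauge simply means the cache is still filling; it only becomes worrying if it never levels off below the configured storage limit.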
Thank you varnish team
Because there is no other real place to do this, I'm going to misuse
this mailing list a little. I would like to thank everybody who is
involved in the development of Varnish. It's a super product and its
performance is outstanding (especially if you're used to Squid). At
first we struggled a bit with the config, but it sure is flexible and
highly customizable.

We are currently doing 5000 connections per second on a regular day
(regular static images), and 500 connections per second for big photos
(1280x1024). Both servers run as separate server daemons on the same
physical host. Before this setup we used 2 physical lighttpd servers to
serve all the images, but in the busiest hours it was a bit laggy and
load times varied from 200ms to 5s per image. Most likely this is due to
the fact that there are some threading problems in lighttpd (it only
uses 1 thread).

With our new Varnish setup we have 2 physical servers serving cache-miss
images and a Varnish server that caches them. We chose 2 backend servers
for redundancy; one server could serve all the images by itself. In the
near future we will move to more Varnish servers to add redundancy, and
maybe we are going to build a CDN with it.

Varnish server: Intel XEON 3.2GHZ, 4GB Memory

CPU Load:
Cpu0 : 2.1% us, 1.1% sy, 0.0% ni, 96.3% id, 0.4% wa, 0.0% hi, 0.0% si
Cpu1 : 0.5% us, 0.4% sy, 0.0% ni, 98.6% id, 0.5% wa, 0.0% hi, 0.0% si
Cpu2 : 2.8% us, 0.9% sy, 0.0% ni, 96.2% id, 0.1% wa, 0.0% hi, 0.0% si
Cpu3 : 0.3% us, 0.2% sy, 0.0% ni, 99.1% id, 0.3% wa, 0.0% hi, 0.0% si

Maybe I could install BOINC on it, so it can crunch some spare CPU to
find ET :p.

Keep up the development. But watch out that you aren't turning it into a
huge, laggy Squid with features nobody uses ;).

My customer prefers to stay anonymous, but I can say he's in the top 500
of the Alexa world ranking.

Regards,
Henry