Re: locking SHMFILE in core failed: Cannot allocate memory
In message <3e3a0c981002231659h2181697bpd8b1ebf97707a...@mail.gmail.com>, Tami Lee writes:

> When I start varnishd, I get the error below. Varnish still seems to
> work, but what does the error message mean? More importantly, what may
> not be working?
>
>   Notice: locking SHMFILE in core failed: Cannot allocate memory

Don't worry about it, it's just noise.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
p...@freebsd.org        | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc
Varnish CLI user feedback, please.
I'm looking at the CLI/varnishadm stuff right now, and would like some
feedback from you guys...

Right now (in -trunk) we have these possible CLI configurations:

  A) no CLI at all
  B) CLI on stdin (-d)
  C) CLI on TELNET (-T)
  D) CLI on call-back (-M)

If the -S option is given, -T/-M CLI connections will require
challenge/response authentication. (Before 2.1, I plan to add -S support
to varnishadm, and maybe some API functions to access the CLI in
libvarnishapi.)

Which of these modes do you actually use? Are more modes needed? Any
other insights on the CLI interface?

-- 
Poul-Henning Kamp
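[Editor's note] The -S challenge/response handshake mentioned above can be sketched in Python. This is a hedged sketch, not taken from this thread: the digest layout (SHA256 over the challenge, the raw contents of the -S secret file, and the challenge again, each newline-terminated) follows how the scheme was later documented for the Varnish CLI, and the challenge/secret values below are purely illustrative. Verify the layout against your varnishd version.

```python
import hashlib

def cli_auth_response(challenge: str, secret: bytes) -> str:
    """Compute the hex authenticator for a CLI 'auth' command."""
    h = hashlib.sha256()
    h.update(challenge.encode("ascii") + b"\n")
    h.update(secret)  # raw -S file contents, including any trailing newline
    h.update(challenge.encode("ascii") + b"\n")
    return h.hexdigest()

# Hypothetical challenge and secret, for illustration only:
resp = cli_auth_response("ixslvvxrgkjptxmcgnnsdxsvdmvfympg", b"foo\n")
print(len(resp))  # 64 hex characters, sent back as: auth <resp>
```

The client would read the challenge from the CLI banner, compute the response, and issue `auth <resp>` before any other command is accepted.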
RE: health check path doesn't change after VCL reload (2.0.6)
I remember seeing a post recently that mentioned that the old health
checks are still performed as long as the old VCL is loaded. This is done
to allow a quick switch back to the previous VCL. That matches the
behavior I have seen.

Jim

-----Original Message-----
From: varnish-misc-boun...@projects.linpro.no
[mailto:varnish-misc-boun...@projects.linpro.no] On Behalf Of John Norman
Sent: Wednesday, February 24, 2010 2:37 PM
Cc: varnish-misc@projects.linpro.no
Subject: health check path doesn't change after VCL reload (2.0.6)

Hi. We notice that after the VCL is reloaded, our old health check path
is still getting checked. The only thing that seems to fix it is a
Varnish restart. Seems like I should log this as a bug?

John
Re: health check path doesn't change after VCL reload (2.0.6)
That's great. Still, the VCL indicated as active had a different path
for the health check.

On Wed, Feb 24, 2010 at 3:24 PM, Poul-Henning Kamp <p...@phk.freebsd.dk> wrote:

> In message <b6b8b6b71002241137j71ae210r35487c328e8f6...@mail.gmail.com>,
> John Norman writes:
>
> > We notice that after the VCL is reloaded, our old health check path
> > is still getting checked. The only thing that seems to fix it is a
> > Varnish restart.
>
> No, unloading the old VCL code should also do it. We keep polling the
> backends of all loaded VCLs, so they are all ready to roll the moment
> you do "vcl.use mumble".
Re: health check path doesn't change after VCL reload (2.0.6)
In message <b6b8b6b71002241315w1c62022t1bf941d6f2cac...@mail.gmail.com>, John Norman writes:

> Still, the VCL indicated as active had a different path for the health
> check.

Hopefully both got probed?

-- 
Poul-Henning Kamp
host header altered by Varnish?
Sorry about all the questions...

On my backend I want to redirect domain.com to www.domain.com. I see
"Host: domain.com" in both the RX and TX sections of the log, but the
redirect isn't getting triggered. The backend is Apache, and the
redirect directives are routine:

  RewriteCond %{HTTP_HOST} ^domain.com$ [NC]
  RewriteRule ^(.*)$ http://www.domain.com$1 [R=301,L]

Am I missing something?

John
Re: health check path doesn't change after VCL reload (2.0.6)
No, only the former/old path.

I'm not super-troubled right now, because a Varnish restart did pick up
the new path (but at the cost of my cache) -- but I'm a bit worried
about the next time I have to change it. I will be changing the probe
interval soon, so that will give me a chance to reproduce the problem,
if it even exists.

As a bit of background: I automate the VCL update to multiple servers
when/if the VCL file has changed. Before the update, I also remove all
of the inactive/old VCLs that are sitting there. Then I add the new one
and use it. When I observed in my backend logs the probes going to the
old URLs, I did check the active VCL on all systems, and they all showed
the new path. In any case, I will try to reproduce and will send the
results.

One last thing: during the restart on one system, I observed the issue
reported here:
http://zarathustrashallspeak.com/2009/11/28/varnish-startup-issue/

John

On Wed, Feb 24, 2010 at 4:18 PM, Poul-Henning Kamp <p...@phk.freebsd.dk> wrote:

> In message <b6b8b6b71002241315w1c62022t1bf941d6f2cac...@mail.gmail.com>,
> John Norman writes:
>
> > Still, the VCL indicated as active had a different path for the
> > health check.
>
> Hopefully both got probed?
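[Editor's note] The cleanup step described here (removing inactive/old VCLs before loading and using a new one) hinges on finding the loaded-but-inactive configs in `vcl.list` output. A minimal Python sketch of that parsing step follows; the three-column "status refcount name" layout and the sample config names are assumptions modelled on 2.0-era varnishadm output, so check them against your version:

```python
def discardable_vcls(vcl_list_output: str) -> list:
    """Return names of loaded-but-inactive VCLs (candidates for vcl.discard)."""
    names = []
    for line in vcl_list_output.splitlines():
        fields = line.split()
        # Assumed layout: "<status> <refcount> <name>"; only "available"
        # (i.e. loaded but not active) configs are safe to discard.
        if len(fields) == 3 and fields[0] == "available":
            names.append(fields[2])
    return names

# Hypothetical vcl.list output:
sample = """\
active          0 boot
available       0 reload_2010-02-24
"""
print(discardable_vcls(sample))  # ['reload_2010-02-24']
```

Each returned name would then be fed to `vcl.discard <name>`, which should also stop the old VCL's health probes per the explanation earlier in the thread.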
Re: Varnish CLI user feedback, please.
On Wed, Feb 24, 2010 at 7:37 PM, Poul-Henning Kamp <p...@phk.freebsd.dk> wrote:

> I'm looking at the CLI/varnishadm stuff right now, and would like some
> feedback from you guys... Right now (in -trunk) we have these possible
> CLI configurations:
>
> A) no CLI at all.

Probably useful for a lot of simple uses. Some CMS might ship Varnish as
part of it, and there won't be a need for a CLI.

> B) CLI on stdin (-d)
> C) CLI on TELNET (-T)
> D) CLI on call-back (-M)

I really like this last one. If we could have an option to let Varnish
start without the cache running (like -d), one could picture some sort
of service accepting connections from newly started varnishd servers.
The service would then configure the caches, provision them with VCL,
and start them up. The need for messing around with shell access and
config files on the caches disappears.

-- 
Per Andreas Buer, CEO, Varnish Software AS
Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer
Re: host header altered by Varnish?
I have the same stuff on my backends, and what you have here should
work. The only thing I can think of from your example is a missing
"RewriteEngine on" at the top; if I don't have that, the rewriting is
silently ignored by Apache.

On 24-Feb-2010 22:21, John Norman wrote:

> Sorry about all the questions...
>
> On my backend I want to redirect domain.com to www.domain.com. I see
> "Host: domain.com" in both the RX and TX sections of the log, but the
> redirect isn't getting triggered. The backend is Apache, and the
> redirect directives are routine:
>
>   RewriteCond %{HTTP_HOST} ^domain.com$ [NC]
>   RewriteRule ^(.*)$ http://www.domain.com$1 [R=301,L]
>
> Am I missing something?
>
> John
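[Editor's note] One further nit on the pattern quoted in this exchange: mod_rewrite conditions are regular expressions, so the unescaped dot in `^domain.com$` matches any character (it would also match "domainxcom"). That is not the cause of the silent failure, which the missing "RewriteEngine on" explains, but escaping the dot is good hygiene. A quick Python sanity check of the two patterns (not Apache itself, just the equivalent regex behavior):

```python
import re

# [NC] in mod_rewrite corresponds to case-insensitive matching.
loose = re.compile(r"^domain.com$", re.IGNORECASE)    # dot matches any char
strict = re.compile(r"^domain\.com$", re.IGNORECASE)  # dot matches only '.'

print(bool(loose.match("domain.com")))   # True
print(bool(loose.match("domainxcom")))   # True  (unwanted match)
print(bool(strict.match("domainxcom")))  # False
print(bool(strict.match("Domain.com")))  # True  (case-insensitive, like [NC])
```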
Varnish 2.0.6 nuking all my objects?
Howdy,

We are finally getting around to upgrading to the latest version of
Varnish and are running into quite a weird problem. Everything works
fine for a bit (1+ day), then all of a sudden Varnish starts nuking all
of the objects from the cache: about 4 hours ago there were 1 million
objects in the cache, now there are just about 172k.

This looks a bit weird to me:

  sms_nbytes    18446744073709548694          .   SMS outstanding bytes

Here are the options I am passing to varnishd:

  /usr/local/sbin/varnishd -a 0.0.0.0: -f /etc/varnish/varnish.vcl
    -P /var/run/varnishd.pid -T 0.0.0.0:47200 -t 600 -w 1,200,300
    -p thread_pools 4 -p thread_pool_add_delay 2 -p lru_interval 60
    -h classic,59 -p obj_workspace 4096 -s file,/varnish/cache,150G

/varnish is 2 x 80GB Intel X-25M SSDs in a software RAID 0 array. OS is
Debian Lenny 64-bit. There is plenty of space:

  /dev/md0    149G   52G   98G  35%  /varnish

Here is the output of varnishstat -1:

  uptime                    134971          .   Child uptime
  client_conn             12051037      89.29   Client connections accepted
  client_drop                    0       0.00   Connection dropped, no sess
  client_req              12048672      89.27   Client requests received
  cache_hit               10161272      75.28   Cache hits
  cache_hitpass             133244       0.99   Cache hits for pass
  cache_miss               1750857      12.97   Cache misses
  backend_conn             1824594      13.52   Backend conn. success
  backend_unhealthy              0       0.00   Backend conn. not attempted
  backend_busy                   0       0.00   Backend conn. too many
  backend_fail                3644       0.03   Backend conn. failures
  backend_reuse                  0       0.00   Backend conn. reuses
  backend_toolate                0       0.00   Backend conn. was closed
  backend_recycle                0       0.00   Backend conn. recycles
  backend_unused                 0       0.00   Backend conn. unused
  fetch_head                  5309       0.04   Fetch head
  fetch_length             1816422      13.46   Fetch with Length
  fetch_chunked                  0       0.00   Fetch chunked
  fetch_eof                      0       0.00   Fetch EOF
  fetch_bad                      0       0.00   Fetch had bad headers
  fetch_close                    0       0.00   Fetch wanted close
  fetch_oldhttp                  0       0.00   Fetch pre HTTP/1.1 closed
  fetch_zero                     0       0.00   Fetch zero len
  fetch_failed                  16       0.00   Fetch failed
  n_srcaddr                      0          .   N struct srcaddr
  n_srcaddr_act                  0          .   N active struct srcaddr
  n_sess_mem                   578          .   N struct sess_mem
  n_sess                       414          .   N struct sess
  n_object                  172697          .   N struct object
  n_objecthead              173170          .   N struct objecthead
  n_smf                     471310          .   N struct smf
  n_smf_frag                 62172          .   N small free smf
  n_smf_large                67978          .   N large free smf
  n_vbe_conn  18446744073709551611          .   N struct vbe_conn
  n_bereq                      315          .   N struct bereq
  n_wrk                         76          .   N worker threads
  n_wrk_create                3039       0.02   N worker threads created
  n_wrk_failed                   0       0.00   N worker threads not created
  n_wrk_max                      0       0.00   N worker threads limited
  n_wrk_queue                    0       0.00   N queued work requests
  n_wrk_overflow             25136       0.19   N overflowed work requests
  n_wrk_drop                     0       0.00   N dropped work requests
  n_backend                      4          .   N backends
  n_expired                 771687          .   N expired objects
  n_lru_nuked               744693          .   N LRU nuked objects
  n_lru_saved                    0          .   N LRU saved objects
  n_lru_moved              8675178          .   N LRU moved objects
  n_deathrow                     0          .   N objects on deathrow
  losthdr                       25       0.00   HTTP header overflows
  n_objsendfile                  0       0.00   Objects sent with sendfile
  n_objwrite              11749415      87.05   Objects sent with write
  n_objoverflow                  0       0.00   Objects overflowing workspace
  s_sess                  12051007      89.29   Total Sessions
  s_req                   12050184      89.28   Total Requests
  s_pipe                      2661       0.02   Total pipe
  s_pass                    134858       1.00   Total pass
  s_fetch                  1821721      13.50   Total fetch
  s_hdrbytes            3932274894   29134.22   Total header bytes
  s_bodybytes         894452020319 6626994.10   Total body bytes
  sess_closed             12050925      89.29   Session Closed
  sess_pipeline                  0       0.00   Session Pipeline
  sess_readahead                 0       0.00   Session Read Ahead
  sess_linger                    0       0.00   Session Linger
  sess_herd                    160       0.00   Session herd
  shm_records            610011852
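[Editor's note] The huge `sms_nbytes` and `n_vbe_conn` values in this dump look like unsigned 64-bit gauges that have been decremented below zero and wrapped around, a classic counter-underflow symptom. Reinterpreting them as signed two's-complement integers makes the actual (negative) values visible:

```python
def as_signed64(u: int) -> int:
    """Reinterpret an unsigned 64-bit counter as a signed two's-complement value."""
    return u - 2**64 if u >= 2**63 else u

print(as_signed64(18446744073709548694))  # -2922  (sms_nbytes)
print(as_signed64(18446744073709551611))  # -5     (n_vbe_conn)
print(as_signed64(172697))                # 172697 (normal values pass through)
```

A slightly negative gauge like this usually means more decrements than increments were counted somewhere, not that terabytes of storage are outstanding; it is a reporting bug rather than the cause of the LRU nuking, which the `n_lru_nuked` counter tracks separately.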