On Mar 26, 2008, at 4:15 PM, Konstantin Chuguev wrote:

Can you please clarify your remark about the bucket-brigade footprint? Are they so slow that they make a memory-based cache no more efficient than a disk-based one? Or the opposite: does sendfile() work so well that serving content from memory is not any faster?

No - they are very fast (in an absolute sense) - and your approach is almost certainly the right one.

However, all in all there is a lot of logic surrounding them; and if you are trying to squeeze out the very last drop (e.g. the 1x1 GIF example), you run into all sorts of artificial limits, specifically on Linux and 2x2-core machines: the memory that needs to be accessed is just a little more scattered than one would prefer, there is all sorts of competition around IRQ handling in the kernel, and so on.

Or in other words: in the pure static case, where you are serving very small files which rarely if ever change and have no variance on any inbound headers, things are not ideal.

But that is a small price to pay - i.e. Apache is more of a Swiss army knife: it saws OK, but a proper hacksaw is 'better'.

I'm developing an Apache output filter for highly loaded servers and proxies that juggles small buckets and brigades extensively. I'm not at the stage yet where I can do performance tests, but if I knew this would definitely impact performance, I would perhaps switch to fixed-size buffers straight away...


I'd bet you are on the right track. However there is -one- small concern: sometimes if you have looooots of buckets and very chunked output, you end up with lots and lots of 1-5 byte chunks, each prefixed by its own length line and CRLF. And this can get really inefficient.

Perhaps we need a de-bucketer to 'dechunk' when outputting chunked.

Dw
