On 03/25/2010 04:09 AM, Ravi Kumar wrote:
    erm, no - you seem confused about exactly how fs cache and tmpfs work.


Please enlighten me. :)

This is what you said:
--
Anyhow, as I know, modern servers load the content in memory, delivers it to client, and keeps the cache in RAM for reducing disk level IO.. This can clearly be seen, a long running apache just fills more and more RAM even if the load reduces.
--

I perhaps over-trimmed my reply.


And, I didn't say anything about fs cache...

you need to reread your own email :)

I really would like to know why tmpfs would be the place to look for
webserver performance gains.

It's one of many areas that can help shift data serving off disk and out of your service path. It gets a bit more complex when you get into large storage areas. For example, on a webserver with 128 GB of RAM, partitioning out 32 GB for the master DB, with 8 GB dedicated to 4 replica instances, will give you a MySQL query rate that's about 21 times higher than a single MySQL instance hitting disk. That is with bog-standard MySQL configs. One can tune some specific areas up, but cascade that level of tuning down to a multi-node setup and you will still see a higher read-query rate. None of these are specifics that apply across the board to any and every app; you will need to test and tune to meet a specific requirement, but these are good places to look.
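
To make that concrete, here is a minimal sketch of the sort of layout described above, assuming a Linux host; the mount point /var/lib/mysql-ram and the sizes are made up for illustration, not taken from any real config:

  # Carve a RAM-backed filesystem out of main memory (size is illustrative):
  mount -t tmpfs -o size=8g tmpfs /var/lib/mysql-ram

  # Point one replica's datadir at the tmpfs mount (my.cnf fragment).
  # Anything here vanishes on reboot, so it only suits read replicas
  # that can be re-seeded from the on-disk master:
  [mysqld]
  datadir = /var/lib/mysql-ram

The read queries then never touch a spindle; only the master keeps a durable copy on disk.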

For these cases, the word "exception" is used :) Don't you agree? If
you state a generalization, others should not disagree just because
they found one or two cases where the generalization failed. I didn't
state a universal truth.

Right, I am assuming that common sense isn't that elusive. Trying to shoehorn a terabyte into tmpfs would be exceptionally counterproductive.


But in reality -
Twitter doesn't let its users upload/showcase their media/pictures etc.
with their tweets. But users do have the ability to upload and change
their background, which can be any picture less than 800 KB in size.
And Twitter has more than 350,000 users (as per highscalability.com).
Even if only 1% of users uploaded content, it would make a huge
contribution to their static media size. Twitter started using Amazon
AWS as a CDN. It clearly proves they have a good amount of static
media.

You are basing this on pure assumption. I know, for a fact, that YouTube's most popular content comes from RAM and not disk. So, going by your rationalisation, I guess we can all assume that YouTube does not have large content?

btw, are you sure Twitter is using CloudFront? AFAIK, Twitter only uses the S3 storage backend.

Read this: http://developer.yahoo.com/performance/rules.html
This is really nice information. Everything is well said, tested, and
used in real life rather than just theory.

The nature of the original post conveyed a sense of little or no control over the content supply or the code, and the effort seemed to be directed at resource control. That was my whole point: don't focus on resource control, but work on the same problem from the other end, i.e. improve the delivery rate and maximise resource usage.

- KB

