On Wed, Mar 24, 2010 at 1:27 PM, Karanbir Singh <mail-li...@karan.org> wrote:

> Use this as a reference platform, eg if you can get 25% better performance
> by moving all your static content ( eg. graphics, css, js ) into a tmpfs,
> that will translate to about the same level[1] boost in production as
> well.


What are the benefits of keeping static media in tmpfs (in a web server
environment)? Can you please show some examples/cases? As far as I know,
modern servers load the content into memory, deliver it to the client, and
keep it cached in RAM to reduce disk-level I/O. This can clearly be seen: a
long-running Apache just fills more and more RAM even when the load drops.

If we use tmpfs, we are only keeping copies of static content in memory -
a sure waste of precious memory.

Assuming (and it is probably a fact) that your static media collection will
grow huge over time as your site gains popularity, I don't think tmpfs
should really be in the picture.
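
For reference, the tmpfs suggestion would look something like this (the
path and size here are just examples, and since tmpfs is volatile the
content has to be copied back in after every reboot - which is exactly the
duplication I mean):

    # mount a 256 MB tmpfs over the static webroot and repopulate it
    mount -t tmpfs -o size=256m tmpfs /var/www/html/static
    cp -a /srv/static-content/. /var/www/html/static/

    # or the equivalent /etc/fstab line:
    # tmpfs  /var/www/html/static  tmpfs  size=256m  0 0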


If you have a quad core, perhaps locking down 2 cpu cores and dedicating
> them for the mysqldb might be a good idea.


This is a good idea, again, if you are using the same system as both web
server and database server. But keep them apart and life will be much
easier (and the cost only a little higher).
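
If you do keep both on one box, a minimal sketch of the pinning idea with
taskset (the core numbers, and pinning Apache to the remaining cores, are
just examples):

    # pin the running mysqld to cores 2 and 3
    taskset -cp 2,3 $(pidof mysqld)

    # and keep the Apache workers on cores 0 and 1
    for pid in $(pidof httpd); do taskset -cp 0,1 $pid; done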

Similarly, separating the i/o for http and mysqld at the block device level
> might be something you want to look into.
>

What did you mean by this? Did you mean that keeping them on different
partitions or disks would be beneficial?
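
If that is the intent, I suppose it comes down to giving the database data
and the webroot their own devices, something like this (device names, mount
points and filesystem are just examples):

    # /etc/fstab - separate spindles for the DB data and the web content
    /dev/sdb1   /var/lib/mysql   ext3   defaults,noatime   0 2
    /dev/sdc1   /var/www         ext3   defaults,noatime   0 2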


> These are just some generic options you can look at - before you move into
> app level optimisations. And there are many many things that could be done
> there as well. Simple things like - are you running the webroots with atime
> enabled / ensure that db indexes are valid and the querylog isn't reporting
> runtimes greater than a second for most things....
>
> Hope this helps and gets you thinking along a more formalised path.
>

My suggestion would be: take the same route others have already taken and
proved. Read the MySQL Performance Blog for tuning MySQL, then read the
case studies and tricks on highscalability.com. You will find them really
nice and proven; you probably won't need all of it, but you will get
insights.


Now, if I have to do something like this, I start with the browser. I
install the Firebug plugin in Mozilla Firefox, enable the Net panel, and
reload my web app's page. It shows how much data is received and what that
data is.

Most static content is then given a rule to "not expire", so browsers won't
hit the server again for the same data. Take an example: the jQuery
JavaScript library is around 60 KB (minified). But 60 KB is still too much
for me, so I enable gzip compression on the web server; when a browser
sends a request whose headers say it knows/supports gzip, my server
compresses the JS (or any compressible content, most effectively text). The
transfer size drops to around 16 KB. When the browser gets it, it
decompresses it and presents it in the page. That saves bandwidth, and a
lot of time.
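
On Apache, a minimal sketch of that with mod_deflate (the MIME type list is
just an example):

    # httpd.conf - compress text-like responses before sending them
    LoadModule deflate_module modules/mod_deflate.so
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript text/javascript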

Now, since I am not going to update my jQuery, CSS, JS and images (the ones
static to the site), I write a rule in the Apache config to deliver them
with an expiry time set months ahead. So next time the browser won't hit my
server at all to fetch jQuery or the other expensive media files.
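
Again a sketch, this time with Apache's mod_expires (the directory path and
the six-month window are just examples):

    # httpd.conf - far-future expiry for static assets
    LoadModule expires_module modules/mod_expires.so
    <Directory "/var/www/html/static">
        ExpiresActive On
        ExpiresDefault "access plus 6 months"
    </Directory>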

After that, I look into tuning at either the DB or the app level. But if I
have to use caching, I use it from the beginning of development. What I put
in the cache (like memcached):

   - Rendered page blocks: today we use template languages in every
   framework. They are comparatively slow and take a lot of CPU to render
   the content, so a good idea is to save those CPU ticks :)
   I just get my templates rendered and push the result into the cache
   server, with an expiry time that depends on how dynamic the presentation
   is. Even if I give the homepage template blocks only 5 minutes, it saves
   a lot of CPU (see the memcached sketch after this list).
   - I enable the slow query log in the MySQL config, so that I can catch
   the queries that take more than 1 second (and 1 second is already really
   evil). With all those queries in hand, I try to work out how to
   optimize, at the application level, the query level or the DB level (a
   my.cnf sketch follows this list).
   - In the application, when I fetch a query result, I store it in the
   memcached server. When a user request changes something in that table, I
   invalidate the cached result. That way the data stays fresh, as long as
   you follow the "one entry and failure point" rule at the application
   level (also shown in the sketch below).
   - I prefer servers based on libevent for delivering my static content;
   they are really fast at handling many concurrent requests.
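
A minimal my.cnf sketch for the slow query log (option names differ a bit
between MySQL versions, so treat this as an example):

    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/mysql-slow.log
    long_query_time     = 1

And a rough sketch of the memcached pattern for the rendered blocks and the
query results, in Python with the python-memcached client; render() and the
db helpers are hypothetical stand-ins for whatever your framework gives you:

    # cache rendered template blocks and query results in memcached
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def homepage_block(render):
        """Cache a rendered template block for 5 minutes."""
        html = mc.get('tpl:homepage:block1')
        if html is None:
            html = render()                        # expensive template rendering
            mc.set('tpl:homepage:block1', html, time=300)
        return html

    def get_user(db, user_id):
        """Cache a query result until it is invalidated."""
        key = 'user:%d' % user_id
        row = mc.get(key)
        if row is None:
            row = db.fetch_user(user_id)           # the real SELECT
            mc.set(key, row, time=600)
        return row

    def update_user(db, user_id, fields):
        """Write through the single entry point, then drop the cached copy."""
        db.update_user(user_id, fields)            # the real UPDATE
        mc.delete('user:%d' % user_id)             # invalidate, so readers refetch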

But in Tanveer's case, he is using vBulletin, a ready-made package, and he
won't want to spend his time fine-tuning at the application level. So he
needs to play at the hardware level, the server level (Apache + MySQL) and
content delivery through another server like nginx (a minimal nginx sketch
follows the links below). He can also search Google for ways to attach a
caching server to vBulletin:

   - http://www.howtoforge.com/using-memcached-with-your-vbulletin-forum-to-reduce-server-load-debian-etch
   - http://www.vbulletin.com/forum/entry.php?2391-Supercharge-your-vBulletin-Forum-with-Memcached
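
For the static-content side, an nginx server block would look roughly like
this (the host name and paths are just examples):

    server {
        listen       80;
        server_name  static.example.com;
        root         /var/www/static;

        location / {
            expires     30d;
            access_log  off;
        }
    }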

And have a lot of RAM :) as much as you can afford for the server.
-- 
-=Ravi=-