Re: [ilugd] Setup terminal server

2010-03-24 Thread Karanbir Singh

On 03/23/2010 07:25 PM, Varun Mittal wrote:

I'm in JP University and I was planning to set up a Linux terminal server in
the college. I have taken a public IP accessible all over the college. The only
problem is that I cannot make changes to the DHCP server. Please suggest a
reliable way to set up a TFTP server.



Unless your DHCP server is able to hand out BOOTP paths, PXE isn't going
to work very well (which is what I am guessing you meant by "TFTP server"
- the TFTP process is only one part of the network bootstrap).


If you are able to run any service on the network, look at rarpd, which
would allow you to hardwire a machine to a specific IP and also pass
along BOOTP info.
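For what it's worth, a rough sketch of what that looks like in practice (the MAC addresses, hostnames and interface name below are made up):

```shell
# Sketch of a rarpd setup. rarpd answers reverse-ARP requests using a
# MAC-to-hostname map; each hostname must then resolve (e.g. via
# /etc/hosts) to the fixed IP you want that machine to get.
cat > /tmp/ethers.example <<'EOF'
00:11:22:33:44:55  labclient1
00:11:22:33:44:66  labclient2
EOF
# quick sanity check of the MAC format before copying to /etc/ethers
grep -cE '^([0-9a-f]{2}:){5}[0-9a-f]{2}[[:space:]]' /tmp/ethers.example
# then run the daemon on the lab interface (needs root):
#   rarpd -e eth0
```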


- KB

___
Ilugd mailing list
Ilugd@lists.linux-delhi.org
http://frodo.hserus.net/mailman/listinfo/ilugd


Re: [ilugd] internship with linux group

2010-03-24 Thread Karanbir Singh

On 03/19/2010 08:24 PM, vivek gupta wrote:

I want to know how I can spend my summer holidays with a Linux group and,
in the meanwhile, do my summer internship. I'm a third-year computer
engineering student.
Any information is welcome.
Thanks in advance.


Have you considered looking at the Google Summer of Code process?

- KB



Re: [ilugd] Open source Web Server load monitoring tools

2010-03-24 Thread Ravi Kumar
On Wed, Mar 24, 2010 at 1:27 PM, Karanbir Singh mail-li...@karan.org wrote:

 Use this as a reference platform; e.g. if you can get 25% better performance
 by moving all your static content (e.g. graphics, CSS, JS) into a tmpfs,
 that will translate to about the same level[1] of boost in production as
 well.
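To make the tmpfs suggestion concrete, here is a minimal sketch; it assumes a Linux box where /dev/shm is already a tmpfs mount (the dedicated web-root mount is shown only as comments since it needs root):

```shell
# /dev/shm is normally a tmpfs on Linux; files written there live in RAM
echo 'body { margin: 0; }' > /dev/shm/site.css
cat /dev/shm/site.css
# a dedicated tmpfs for a static web root would look like (needs root):
#   mount -t tmpfs -o size=256m tmpfs /var/www/static
#   cp -a /var/www/static.disk/. /var/www/static/
# note: tmpfs contents vanish on reboot, so repopulate from disk at boot
```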


What are the benefits of keeping static media in tmpfs (in a web-server
environment)? Can you please show some examples/cases? As far as I know,
modern servers load content into memory, deliver it to the client, and keep
it cached in RAM to reduce disk-level I/O. This can clearly be seen: a
long-running Apache just fills more and more RAM even when the load reduces.

If we use tmpfs, we are keeping extra copies of static content in memory:
one sure waste of precious memory.

Assuming (and it's probably a fact) that when your site gains popularity
your static media collection will grow huge over time, I don't think
tmpfs should really be in the picture.


 If you have a quad core, perhaps locking down two CPU cores and dedicating
 them to the MySQL DB might be a good idea.


That is a good idea, again assuming you are using the same system for the
web server and the database server. But keep them apart and life will be
much easier (though the cost will be a little higher).
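If both do stay on one box, a hedged sketch of the core-locking idea using taskset (assuming a quad-core machine and the usual mysqld/httpd process names):

```shell
# pin an already-running mysqld to cores 2-3 and httpd to cores 0-1
# (needs root; process names are the common defaults, adjust to taste):
#   taskset -cp 2,3 "$(pidof mysqld)"
#   taskset -cp 0,1 "$(pidof httpd)"
# you can inspect any process's current affinity without root, e.g. this shell:
taskset -cp $$
```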

 Similarly, separating the I/O for httpd and mysqld at the block-device level
 might be something you want to look into.


What did you mean by this? Are you saying that keeping them on different
partitions or disks would be beneficial?
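For context, one plausible reading is simply dedicating a block device to each workload, so mysqld's random I/O does not compete with httpd's reads. An illustrative /etc/fstab fragment (device names and mount points are assumptions):

```
# /etc/fstab (illustrative; one spindle each for the DB and the web content)
/dev/sdb1   /var/lib/mysql   ext4   noatime   0 2
/dev/sdc1   /var/www         ext4   noatime   0 2
```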


 These are just some generic options you can look at before you move into
 app-level optimisations. And there are many, many things that could be done
 there as well. Simple things like: are you running the web roots with atime
 enabled? Ensure that DB indexes are valid and the query log isn't reporting
 runtimes greater than a second for most things.

 Hope this helps and gets you thinking along a more formalised path.


My suggestion would be to take the same route others have taken, one that
is already proven. Read the MySQL Performance Blog for tuning MySQL, then
read the case studies and tricks on highscalability.com.
You will find them really nice and proven; you may not need all of it,
but you will get insights.


Now, if I have to do something like this, I start with the browser. I
install the Firebug plugin for Mozilla Firefox, enable the Net panel, and
refresh/load my web-app page. It shows how much data is received, and what
that data is.

Most static content is then put under a rule to never expire, so browsers
won't hit the server for the same data again. For example, the jQuery
JavaScript library is around 60 KB (minified). But 60 KB is still too much
for me. I enable gzip compression on the web server, so when a browser
sends a request with a header saying it supports gzip, my server
compresses the JS (or any content which compresses well, text most
effectively). The transferred size drops to around 16 KB. When the browser
gets it, it decompresses it and presents it to the page. That saves
bandwidth, and a lot of time.
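The size reduction is easy to demonstrate from a shell; in this rough sketch the repeated snippet just stands in for minified JS, and the exact numbers will vary:

```shell
# repetitive text (like minified JS) compresses very well with gzip
orig=$(yes 'function add(a,b){return a+b;}' | head -n 500 | wc -c)
comp=$(yes 'function add(a,b){return a+b;}' | head -n 500 | gzip -c | wc -c)
echo "original: ${orig} bytes, gzipped: ${comp} bytes"
```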

Now, since I am not going to update my jQuery, CSS, JS and images (which
are static to the site), I write a rule in the Apache config to deliver
them with an expiry time set months ahead. So next time the browser won't
hit my server for the jQuery or other expensive media files.
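The Apache side of the gzip and expiry rules above can be sketched with mod_deflate and mod_expires; the directives are standard Apache 2.x, but the content types and the six-month window are just examples:

```apache
# compress text-ish responses on the fly (mod_deflate must be enabled)
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# let browsers cache static assets for months (mod_expires must be enabled)
ExpiresActive On
ExpiresByType image/png "access plus 6 months"
ExpiresByType text/css "access plus 6 months"
ExpiresByType application/javascript "access plus 6 months"
```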

After that, I look into tuning at either the DB or the app level. But if I
have to use caching, I use it from the beginning of development. What I put
in the cache (like memcached):

   - Rendered page blocks: today we use template languages in every
   framework. They are comparatively slow and take a lot of CPU to render
   content, so a good idea is to save those CPU ticks :)
   I get my templates rendered and push the result to the cache server,
   with an expiry time depending on how dynamic the presentation is. Even
   5 minutes for homepage template blocks saves a lot of CPU.
   - I enable the slow query log in the MySQL config, so that I can catch
   queries which take more than 1 second (and even 1 second is really
   evil). With those queries in hand, I try to work out how to optimize,
   either at the application level, the query level or the DB level.
   - In the application, when I fetch a query result, I store it in the
   memcached server. When a user request changes that table, I invalidate
   the cached results. That way the data is always fresh, as long as you
   follow the rule of a single entry and failure point at the application
   level.
   - For delivering static content I prefer only servers based on
   libevent; they are really fast at processing multiple requests
   concurrently.
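The slow-query-log settings from the list above map to a my.cnf fragment along these lines (MySQL 5.1-era option names; the log path is illustrative):

```
# /etc/my.cnf
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
```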

But in Tanveer's case, he is using vBulletin, a ready-made package, and he
won't want to spend his time fine-tuning at the application level. So he
needs to play at the level of hardware, the servers (Apache + MySQL), and
content delivery via another server like nginx. He can also search Google
to find a way to attach a caching server into 

Re: [ilugd] Open source Web Server load monitoring tools

2010-03-24 Thread Ravi Kumar
On Wed, Mar 24, 2010 at 10:56 PM, Karanbir Singh mail-li...@karan.org wrote:

 On 03/24/2010 10:02 PM, Ravi Kumar wrote:

  If we use tmpfs, we are keeping just copies of static content in memory
 - one sure waste of precious memory.


 erm, no - you seem confused about exactly how fs cache and tmpfs work.


Please enlighten me. :) And I didn't say anything about fs cache... When
you contradict someone or say something is wrong, it would really be nice
if you supplied reasons, facts, or pointers to support your view, rather
than just pointing out that I am wrong and giving no reason.

I really would like to know why tmpfs would be the corner to look in for
web-server performance gains.



  Assuming (and it's probably a fact) that when your site gains popularity
 your static media collection will grow huge over time, I don't think
 tmpfs should really be in the picture.


 that's again completely wrong.

 even extremely popular sites like SmugMug or Twitter have just a few megs
 of static content.

Let's say you are right; then for these cases the word "exception" is
used :) Don't you agree? If you make a generalization, others should not
disagree just because they found one or two cases where it failed. I
didn't state a universal truth.

But in reality:
Twitter doesn't let its users upload/showcase media/pictures with their
tweets, but users do have the ability to upload and change their
background, which can be any picture less than 800 KB in size. And Twitter
has more than 350,000 users (as per highscalability.com). Even 1% of users
uploading content makes a huge contribution to their static media size.
Twitter started using Amazon AWS as a CDN. That clearly proves they have a
good amount of static media.



 besides, if you are getting over a million hits/hour, I am sure you can
 afford a decent sysadmin and a few more machines. If not, you are doing it
 wrong :)

 - KB


There are many ways to optimize a website. I didn't say that whatever you
pointed out is wrong. There are optimization areas where you focus to gain
the most, but you have to decide where to look first and what to
prioritize. You probably won't want to spend 80% of your time to achieve a
20% performance gain while giving less priority to the approaches that
deliver an 80% gain in 20% of the time :) ...


Read this: http://developer.yahoo.com/performance/rules.html
It is really nice information: everything is well explained, tested and
used in real life rather than being just theory.

-- 
-=Ravi=-


[ilugd] Linux terminal server

2010-03-24 Thread Varun Mittal
Please suggest the best way to set up a terminal server without using the
DHCP server.

