Julius,

Thanks for the insights. I'm still working out which factors can affect
performance significantly, so that I can then try to measure each of
them. The obvious areas I can think of so far are (a rough way to sample
both is sketched right after the list):

a. Disk I/O vs. RAM
b. Packets sent vs. number of workstations
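
To keep (a) and (b) measurable in a reproducible way, something along
these lines could be left running on the server and its output logged.
This is only a sketch: the device name "hda", the interface name "eth0"
and the 10-second interval are my own assumptions, and /proc/diskstats
only exists on 2.6 and newer kernels (older kernels expose similar
counters elsewhere):

#!/usr/bin/env python
# Sample disk throughput and packets sent at a fixed interval.
import time

DEV = "hda"      # block device to watch -- adjust to your server
IFACE = "eth0"   # NIC facing the workstations -- adjust as well
INTERVAL = 10    # seconds between samples

def disk_counters(dev=DEV):
    # /proc/diskstats: field 5 = sectors read, field 9 = sectors written
    for line in open("/proc/diskstats"):
        fields = line.split()
        if len(fields) > 9 and fields[2] == dev:
            return int(fields[5]), int(fields[9])
    return 0, 0

def tx_packets(iface=IFACE):
    # /proc/net/dev: after the "iface:" prefix, field 9 = packets transmitted
    for line in open("/proc/net/dev"):
        if ":" in line and line.split(":")[0].strip() == iface:
            return int(line.split(":")[1].split()[9])
    return 0

r0, w0 = disk_counters()
p0 = tx_packets()
while True:
    time.sleep(INTERVAL)
    r1, w1 = disk_counters()
    p1 = tx_packets()
    print("sectors read/s %.1f  written/s %.1f  packets sent/s %.1f"
          % ((r1 - r0) / float(INTERVAL),
             (w1 - w0) / float(INTERVAL),
             (p1 - p0) / float(INTERVAL)))
    r0, w0, p0 = r1, w1, p1

Plotting packets sent per second against the number of active
workstations should give a rough idea of where the network side starts
to bite.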

For now I'm checking memory usage only (no performance-impact tests
yet). According to the KDE utility, 5 workstations running OpenOffice
(each working on a single-page text document) amount to the following
values (in MB):

Bare minimum (used+shared) - 279 MB (this already includes the other
                             server processes). If your physical RAM is
                             limited to this value, disk swapping is
                             likely to occur, and if your workstations
                             are very active, your disk channel may
                             saturate.

Cached mem (2 hrs from start) - 297 MB (this grows over time). As a
     personal rule, I find around 300 MB cached at any given time very
     satisfactory, and easy on the fixed disk too ;-). The more you can
     spare for cache, the better.
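
For anyone who wants to duplicate these readings without the KDE
monitor, the same kind of figures can be pulled straight from
/proc/meminfo. A minimal sketch, assuming the usual "Key:  value kB"
format; the KDE tool may compute its "used+shared" figure slightly
differently, so treat the numbers as comparable rather than identical:

#!/usr/bin/env python
# Report used (excluding buffers/cache), buffers and cached memory in MB.

def meminfo_mb():
    info = {}
    for line in open("/proc/meminfo"):
        parts = line.split(":")
        if len(parts) == 2:
            value = parts[1].split()
            if len(value) == 2 and value[1] == "kB":
                info[parts[0]] = int(value[0]) / 1024.0
    return info

m = meminfo_mb()
used = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
print("used (excl. buffers/cache): %4.0f MB" % used)
print("buffers:                    %4.0f MB" % m["Buffers"])
print("cached:                     %4.0f MB" % m["Cached"])

Running it every few minutes from cron would give the "cached over time"
curve without anyone having to sit in front of the monitor.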

JS>         I always admire people that go about finding things in a
JS> controlled, engineering way. Then I go and do it quick and sloppy ;-)
JS>

Yes, you get results quickly that way too. In these tests we are not
after exact numbers anyway, just practical, qualitative benchmark values
to start us off. Still, we owe some degree of detail, so that the
findings can be documented and duplicated by others.
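
As for the memory-grabbing trick you mention below (a program that
requests memory in large blocks instead of physically pulling RAM),
that is probably how I'll run the reduced-memory tests. A minimal
sketch of what I have in mind, assuming Python is on the server; the
256 MB default is my own assumption:

#!/usr/bin/env python
# Hold N megabytes of memory until interrupted, to simulate a server
# with that much less RAM.
import sys, time

mb = 256                      # default size in MB; override on the command line
if len(sys.argv) > 1:
    mb = int(sys.argv[1])

# Building the string forces the kernel to actually hand over the pages,
# not just reserve the address space.
held = "x" * (mb * 1024 * 1024)

print("Holding %d MB; press Ctrl-C to release it." % mb)
try:
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    pass

One caveat: without mlock() the kernel can still swap the grabbed pages
out under pressure, so this only approximates removing the RAM.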

Best regards,

Phil               mailto:phil@Intelisoft-phils.com


Server Info:
 - Mandrake 8.1 (no special settings like firewalls, NAT etc. made)
 - OpenOffice.Org 1.0.1
 - SAMBA / Netware connectivity

*** Thursday, October 17, 2002, 9:24:30 PM, you wrote:

JS> Philip,
JS>         I always admire people that go about finding things in a
JS> controlled, engineering way. Then I go and do it quick and sloppy ;-)
JS>         The quick way to estimate the minimum memory need is to run top and
JS> look at actual memory usage. When the buffer size drops below n megabytes,
JS> where n is a value you arrive at by carefully evaluating your data set /
JS> :-) /, it is time to stop removing memory. To put things in perspective,
JS> my data set on the LTSP server for 25 users is about 500 MB in 14 days. I'd
JS> hate to go below 200 MB of buffers on this server. On another server, where
JS> I run Unix and support business activities for 300 users, no graphics, the
JS> data set is 1.2 GB. This server can go to 1.4 GB of buffer cache. The result
JS> is excellent speed - 99.8% of the I/O requests are satisfied from memory.
JS>         Apropos removing memory for tests - I would put a bunch of memory
JS> in the server and then run a program that requests memory in large blocks.
JS> That way you don't have to open the box, and you can even test on a live
JS> system. Julius
JS> p.s. "data set" is the actual data used by programs during the observation
JS> period.


