John,

Thanks for your reply, and for taking the trouble to grab those measurements. My
own HPC cluster exists only in my head (where it performs poorly!), so I can't
return the favor directly. But like many good answers, yours narrows the
discussion down to a few concrete points.

I'll spare you stories of booting SVr4 on 512K :-) -- it didn't work
well, "vi temp" hung, and I had to get expansion memory for my 286. Of course
unices can be made arbitrarily tiny, but my feeling is that 20% of RAM is
not unreasonable these days. However, we'd want to think about what
resources are consumed (presumably, disk?) or what functionality is lost
(presumably, none?) when the OS does not have the RAM to resize itself
larger. That is, what is the cost when the OS uses only 153MB because only
512MB is available, when it would use 400MB if it could get it? Presumably
more disk paging?
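For the archives, here is a rough sketch (mine, not John's actual tool) of how one might probe the kind of number he reports below: search for the largest minimum working set that SetProcessWorkingSetSize() will accept. The 2GB probe ceiling, the 1MB step, and the 16MB headroom are assumptions for illustration, and the limits (and privilege requirements) vary by Windows version, so treat this as a sketch rather than a definitive measurement.

```c
/* Sketch: binary-search the largest minimum working set the OS will grant.
 * A successful SetProcessWorkingSetSize(min, max) call means the OS agreed
 * to keep at least `min` bytes of this process resident while it runs.
 * Windows-only; compile with a Win32 toolchain. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE self = GetCurrentProcess();          /* pseudo-handle, has PROCESS_SET_QUOTA */
    SIZE_T step = 1024 * 1024;                  /* probe granularity: 1MB (assumption) */
    SIZE_T lo = 0;
    SIZE_T hi = (SIZE_T)2048 * 1024 * 1024;     /* probe ceiling: 2GB (assumption) */
    SIZE_T best = 0;

    while (lo <= hi) {
        SIZE_T mid = lo + (hi - lo) / 2;
        /* Ask for a minimum of `mid`, with 16MB of headroom on the maximum. */
        if (SetProcessWorkingSetSize(self, mid, mid + 16 * 1024 * 1024)) {
            best = mid;                         /* accepted: try for more */
            lo = mid + step;
        } else {
            if (mid < step)
                break;                          /* nothing smaller to try */
            hi = mid - step;                    /* rejected: try for less */
        }
    }
    printf("largest accepted minimum working set: %lu MB\n",
           (unsigned long)(best / (1024 * 1024)));
    return 0;
}
```

Note that a granted minimum is a residency guarantee for a running process, not a measurement of what the OS itself occupies; subtracting it from physical RAM gives the "so you might say the OS is X" figure John uses below.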

This ties into the related item of disk usage (thanks very much for
introducing it into the thread). That would connect the footprint question
to broader networking issues in the case of diskless compute nodes (where
the virtual memory expected by the OS is remote) and to disk usage on
diskful nodes.

I hope someone sitting at an actual terminal of an actual cluster can
handily post something similar for comparison. Of course it's not
apples-to-apples -- unless someone with a Beowulf of Macs wants to chime in
:-) But the comparison can make a table of contents for a discussion.

Engineers build things with constraints: climate, terrain, budget, stresses,
loads... and their employer's business model. I have no animosity toward
engineers building things with constraints.

Peter


On 4/3/07, John Vert <[EMAIL PROTECTED]> wrote:

It depends how you measure the size. Here are some measurements I just made.



On a 2GB RAM machine out of the box, a single process can get a 1.6GB minimum
working set by calling SetProcessWorkingSetSize() (i.e., 1.6GB of memory
resident and available for use without page faulting). So you might say the
OS is 400MB. But on a 1GB machine, you can get 750MB, so you might say the
OS is 250MB. And on a 512MB machine you can get 359MB, so now the OS is
153MB. In all cases you could probably get a bit more depending on what your
process is doing and whether you want to tweak the configuration any. (e.g.
if you're not doing cached file I/O, the file cache will shrink under memory
pressure)



The disk footprint of the OS is about 1.75GB plus another 2GB for the
paging file.



John Vert

Development Manager

High Performance Computing



*From:* Peter St. John [mailto:[EMAIL PROTECTED]
*Sent:* Tuesday, April 03, 2007 1:49 PM
*To:* John Vert
*Cc:* Robert G. Brown; [email protected]; Bill Bryce
*Subject:* Re: [Beowulf] Win64 Clusters!!!!!!!!!!!!



John,

 Thank you for

"...

4. If you want to learn more about Windows HPC clusters, I recommend
checking out www.microsoft.com/hpc and www.windowshpc.net . I'm also happy
to answer questions on this list, but frankly the S/N ratio tends to drop
dramatically as soon as someone mentions Windows or Microsoft.
..."



How large is the OS on an idle (but ready) compute node in MS's cluster
system? Reasonable possible responses would include that the question is
ill-posed. I would appreciate any response.



Here's for d/dt (S/N) > 0

Thanks,

Peter

_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
