More specifically for HPC, Linux seems designed for the desktop and
for small-memory machines.

The only justice I can see in that claim is that there hasn't been all that
much effort to get bigpages widely/easily used. In particular, I don't
see that scheduler or general memory-management issues in Linux are
particularly biased for the desktop or against HPC.

That's funny, because I've heard people worry it was the complete
opposite: that Linux was driven by Big Iron, and that no one cared about
the "little desktop guy" (Con Kolivas is an interesting historical example).

Con didn't play the game right - you have to have the right combination of
social engineering (especially timing and proactive response) and good tech
kung fu. Kernel people are biased towards a certain aesthetic that doesn't
punish big-picture redesigns from scratch, but _does_ punish solutions in
search of a problem.

So the question is: if you had a magic wand, what would you change in the
kernel (or perhaps libc or other support libraries)? Most of the things I
can think of are not clear-cut. I'd like to be able to give better
information from perf counters to our users (though I don't think Linux is
really in the way). I suspect we lose some performance due to jitter
injected by the OS (and/or our own monitoring) and would like to improve
that, but again, it's hard to blame Linux. I'd love to have better options
for cluster-aware filesystems. Kernel-assisted network shared memory?
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
