On Thu, 2003-06-05 at 11:23, Soren Harward wrote:
> Hmm, I question the choice of hardware. Multiprocessing for an NFS box
> is either a) pointless or b) detrimental. Why? Because NFS isn't
> multithreaded. knfsd will primarily run on one processor, and if the
> kernel *does* switch it from one processor to another, you have to re-load
> the cache on the other processor with the knfsd code, which is a pretty
> big hit. For NFS, if the choice is between one hefty MP 8-rack-unit
> machine and 8 SP 1-rack-unit machines, the latter will be your better
> choice. Segment your file system, which will make Linux happier anyway.
> Sure, it's a few more lines in the automount configuration, but that's
> less of a headache than one slow NFS server.
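For anyone who hasn't set this up: the "few more lines in the automount configuration" would look roughly like this in an autofs map. This is just a sketch; the hostnames, export paths, and mount point are made up.

```shell
# /etc/auto.master -- hand the /dept mount point to the map below
/dept   /etc/auto.dept

# /etc/auto.dept -- each slice of the namespace lives on its own
# small, single-purpose NFS server (hypothetical hosts/paths)
www     fs1.example.edu:/export/www
mail    fs2.example.edu:/export/mail
home    fs3.example.edu:/export/home
```

Clients see one seamless /dept tree, while the load is actually spread across three boxes. The catch is exactly what I get into below: the slices are fixed, so rebalancing space later means moving data between servers by hand.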
Although nfsd is not multithreaded, it does spawn a configurable number of child processes to handle requests. So if some nfsd processes are hogging one processor, the rest run just fine on the other CPUs. It's not an issue of multithreading; it's multiprocessing, which a hyperthreading CPU can do just fine. When I was in CS, we ran the main file server with 32 nfsd server processes.

A segmented file system sounds good in theory until you start running out of space on one segment. How do you move data around in a transparent way? You can't. It's like partitioning your Linux install: do you want a billion little partitions or one big one? There are obviously reasons for both. Until we have a good, reliable distributed file system (we're trying AFS, but it's got problems), I'll always recommend the larger, singular units.

> Now for multithreaded applications like samba, apache, or slapd, then MP
> machines are just fine because they're continually forking new processes
> (or as it were, threads) for each connection. Thus, on an SP box, the
> CPU cache gets re-filled anyway. So you may as well split the load
> between a few different processors.
>
> Something I've come to believe over the last year is that a large
> cluster of small, one-use servers is a lot easier to manage than a small
> cluster of large, multi-use servers. There are still some applications
> where the reverse is true (like enterprise-size databases or some
> supercomputing applications), but your general internet services run
> much better when split between a handful of machines. The other
> advantage is that if one goes down, then it's trivial to just nuke it
> and copy a complete image over from a working server, rather than
> restoring everything from backup and bringing each subsystem back on
> line.

True. There must always be tradeoffs, though.
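For reference, bumping the nfsd process count is just a startup knob. A sketch of the usual places to set it (the exact file is distribution-dependent, and 32 is only the number we happened to use):

```shell
# Debian-style: /etc/default/nfs-kernel-server
# Red Hat-style: /etc/sysconfig/nfs
RPCNFSDCOUNT=32

# Or by hand; the argument to rpc.nfsd is the number of server processes:
rpc.nfsd 32

# Each one shows up as [nfsd] in the process table, so you can sanity-check:
ps ax | grep '\[nfsd\]' | wc -l
```

A reasonable rule of thumb is to raise the count until clients stop seeing retransmits under peak load; the right number depends on how many concurrent clients you serve.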
And also, here at BYU we have to keep the bean counters happy. These guys are not tech-savvy, and to them it's wasteful to do one task per machine. So there is a lot of pressure to justify every machine and its tasks, and to consolidate.

Michael

-- 
Michael L Torrie <[EMAIL PROTECTED]>
____________________
BYU Unix Users Group
http://uug.byu.edu/

___________________________________________________________________
List Info: http://uug.byu.edu/cgi-bin/mailman/listinfo/uug-list
