Daniel Ouellet wrote:
> Henning Brauer wrote:
>> * Joachim Schipper <[EMAIL PROTECTED]> [2007-04-20 14:49]:
>>> On Fri, Apr 20, 2007 at 12:36:29PM +0200, Henning Brauer wrote:
>>>> * Joachim Schipper <[EMAIL PROTECTED]> [2007-04-20 00:36]:
>>>>> On Thu, Apr 19, 2007 at 10:51:56PM +0100, Stuart Henderson wrote:
>>>>>>> I don't think NFS/AFS is that good an idea; you'll need very beefy
>>>>>>> fileservers and a fast network.
>>>>>> NFS may actually be useful; if you really need the files in one
>>>>>> directory space for management/updates that's a way to do it (i.e.
>>>>>> mount all the various storage servers by NFS on a management
>>>>>> station/ftp server/whatever).
>>>>> Something like that might be a very good idea, yes. Just don't try to
>>>>> serve everything directly off NFS.
>>>> there is nothing wrong with serving directly from NFS.
>>> Really? You have a lot more experience in this area, so I will defer to
>>> you if you are sure, but it seems to me that in the sort of system I
>>> explicitly assumed (something like a web farm), serving everything off
>>> NFS would involve either very expensive hardware or be rather slow.
>>
>> no. cache works. reads are no problem whatsoever in this kind of setup
>> (well. I am sure you can make that a problem with many frontend servers
>> and lots to read. obviously. but for any sane number of frontends,
>> should not)

> What do you consider a sane number of front ends: 10, fewer, more?

Well, I think that depends on too many variables. I have a movie server (OBSD) that exports NFS to two home-theatre computers (FBSD). The movie server is a dual P3 1GHz with 4 U320 SCSI disks in RAID 0. When the two theatre computers simultaneously play different DVDs, the movie server is >90% idle; that's with TCP mounts. With UDP mounts it's >96% idle. Granted, movie files are large, sequential data, but the bottleneck in my network is my 100Mb/s LAN anyway.
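For reference, the config involved is tiny. Something like this (names and addresses invented for the example; see exports(5) and FreeBSD's mount_nfs(8) for the exact syntax):

    # /etc/exports on the OpenBSD movie server
    /movies -ro -maproot=nobody 192.168.1.10 192.168.1.11

    # /etc/fstab on each FreeBSD theatre box: TCP mount...
    movies:/movies  /movies  nfs  ro,tcp  0  0

    # ...or UDP (the default), which idles even lower for me
    movies:/movies  /movies  nfs  ro  0  0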

> By cache, do you mean cache on the NFS server, or on the NFS client? Sorry, it looks like I have more questions than answers; I gave up on NFS a few years ago because of the bottleneck on NFS transfers. Writes were bad, reads were OK, but not great.

I dump DVDs to VOB format using mplayer, so the files are 4-8GB each. That size defeats caching in my situation. If you had many front ends accessing similar files, and caching were actually taking place, you'd see better efficiency. I believe NFS also does some caching server-side (directory lookups and the like).

Also, when I rip a DVD, it goes straight to the NFS mount. The bottleneck there is my DVD drives, which can only read at ~2MB/s. Again, the movie server is almost idle, barely breaking a sweat.
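The rip itself is nothing fancy; from memory it's along these lines (title number and file name vary per disc):

    # dump title 1 of the disc straight onto the NFS mount
    mplayer dvd://1 -dumpstream -dumpfile /movies/title.vob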

> It may well be different now. I would be happy with decent read performance, but what can be expected?

You can easily saturate a 100Mb LAN with NFS traffic from one NFS server.
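Back-of-the-envelope, so nobody has to take my word for it (rough numbers):

    100Mb/s / 8 bits         = ~12MB/s raw
    minus protocol overhead  = ~10MB/s usable payload
    one DVD stream           = ~0.5-1MB/s

Two theatre boxes barely dent the pipe, and a single modern disk reads sequentially well past 10MB/s, so the wire fills up long before the server does.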

> The archives are not too kind on the subject, I have to say. It always looks like the bottleneck is on the NFS side.

I disagree. Too many people try running cheap IDE disks in server environments and then wonder why they have poor performance. They blame the software. Get SCSI; it is made for highly random access, which is what happens when many machines pound on a single logical drive.

> For a small site or low traffic, yes, that's great, but where can one expect to hit the limits? Any ideas?

Who knows; just run an experiment. In my experience, with many clients the bottleneck tends to be the local file system (UFS and the disk subsystem) on the exporting machine; otherwise it is network bandwidth. NFS seems really light on top of UFS, especially over UDP. BTW, UDP mounts are very robust when the clients and the server are on the same Ethernet segment.
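A quick and dirty version of that experiment (paths are placeholders; make the test file bigger than the client's RAM or you'll just benchmark its cache):

    # on a client: write 1GB across the mount, then read it back
    dd if=/dev/zero of=/mnt/nfs/testfile bs=64k count=16384
    dd if=/mnt/nfs/testfile of=/dev/null bs=64k

    # on the server meanwhile: watch the disks and the CPU
    iostat 1
    systat vmstat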

> Maybe it's time for me to revisit this yet again, but I've never been very successful with high traffic.

All I can say is that I love NFS; you're missing out. Plus, it is so simple. I have wanted to check out AFS for failover reasons, but there are too many docs for me to read.
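Simple as in: the whole server side on OpenBSD is about four lines. Roughly this (network numbers are examples; exports(5) has the full story):

    # /etc/rc.conf.local
    portmap=YES
    nfs_server=YES

    # /etc/exports
    /data -alldirs -ro -network=192.168.1 -mask=255.255.255.0

Reboot, or start portmap, mountd and nfsd by hand, and the clients can mount.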

One last note. Holland's disk structuring is very cool (read his earlier post for details). If I were to serve NFS to dozens or hundreds of clients, I would use his scheme, but apply the partitioning at the host level: when an NFS server saturates, spread the load by adding another server. The drawback is that each client ends up with multiple NFS mounts, but if you have that many machines uniformly accessing an NFS array, the whole mounting process should be automated anyway. This is where clever planning pays off.
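To make that concrete, each client's fstab might look something like this (server names and split points invented for the example):

    # content partitioned across NFS servers at the host level
    nfs1:/export/htdocs/a-m  /var/www/htdocs/a-m  nfs  ro  0  0
    nfs2:/export/htdocs/n-z  /var/www/htdocs/n-z  nfs  ro  0  0

Generate that file from one master map, and adding a third server when the first two saturate becomes a one-line change per client.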

-pachl
