> So - I did a redneck test instead - dd 64MB of /dev/zero to a file on the
> mounted partition.
>
> On writes, NFS gets 4.4MB/s, GlusterFS (server side AFR) gets 4.6MB/s.
> Pretty even.
> On reads GlusterFS gets 117MB/s, NFS gets 119MB/s (on the first read after
> flushing the caches; after that it goes up to 600MB/s). The unbuffered
> read numbers are in the same ballpark, and the small gap is roughly what
> I'd expect considering NFS is running over UDP and GLFS over TCP.
>
> So in conclusion - there is no performance difference between them worth
> speaking of. What, then, is the point of implementing a user-space NFS
> handler in glusterfsd when unfsd seems to do the job as well as
> glusterfsd could reasonably hope to?
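
For reference, the test described above would look roughly like the
sketch below; the mount points /mnt/nfs and /mnt/gluster are hypothetical
stand-ins for the two exports, and conv=fsync is an addition here so the
write figures reflect more than page-cache speed:

    # Write test: 64MB of zeroes to each mount; conv=fsync forces the
    # data out so the reported rate reflects real network/disk I/O.
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=64 conv=fsync
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=64 conv=fsync

    # Drop the page/dentry/inode caches so the first read is unbuffered.
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Read test: the first pass is uncached; rerunning it shows the
    # buffered (600MB/s-class) rate mentioned above.
    dd if=/mnt/nfs/testfile of=/dev/null bs=1M
    dd if=/mnt/gluster/testfile of=/dev/null bs=1M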

Can you clarify whether your tests had a setup where the NFS re-export
would result in 2 hops for IO? From what you have shown, it looks like
both tests had just one (physical) network hop (not counting loopback).
The need for an NFS xlator becomes apparent when you want to re-export
a distributed glusterfs configuration via NFS, which can result in more
than one network hop per IO. The context switches between these two
hops make things considerably worse, and having the NFS xlator inside
glusterfs lets it use the caches in the performance translators very
effectively.
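
To make the two-hop case concrete, a sketch of the re-export setup in
question follows; the host name "gateway", the mount points, and the
volfile/exports paths are all hypothetical:

    # Hop 1: a gateway host mounts the distributed glusterfs volume
    # through FUSE.
    glusterfs -f /etc/glusterfs/client.vol /mnt/gluster

    # The gateway re-exports that FUSE mount over NFS with unfsd.
    # Each request now crosses unfsd -> kernel -> glusterfs client,
    # paying context switches on top of the extra network hop.
    unfsd -e /etc/exports.gluster

    # Hop 2: the actual NFS client mounts the re-export.
    mount -t nfs gateway:/mnt/gluster /mnt/data

With an NFS xlator inside glusterfsd, the unfsd/FUSE detour on the
gateway disappears: NFS requests are answered in-process, directly on
top of caches such as the io-cache and read-ahead translators.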

Avati

