[Gluster-devel] Possible Performance Tweak for unfsd Users

2010-01-06 Thread Gordan Bobic
I can't figure out why this might be the case, but it would appear that when unfsd is bound to a custom port and not registered with portmap, the performance is massively improved. I changed my init.d/unfsd script as follows, in the start option: - /usr/sbin/unfsd -i ${pidfile} + /usr/sbin/unf…
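The quoted diff is truncated, but the tweak it describes — pinning unfsd to fixed ports and skipping portmapper registration — can be sketched as an init.d "start" fragment. This is a guess at the shape of the change, not the poster's actual script; the `-n`/`-m`/`-p` flag names are from the unfs3 manual as I remember them, and port 2050 is an arbitrary choice, so verify both against your installed version.

```shell
# init.d/unfsd "start" fragment -- a sketch of the tweak described above.
# -i writes the pidfile (as in the original script); -n/-m pin the NFS and
# MOUNT ports; -p skips portmapper registration. Flag names are assumptions
# taken from the unfs3 manual -- check `unfsd -h` on your system.
pidfile=/var/run/unfsd.pid
/usr/sbin/unfsd -i ${pidfile} -p -n 2050 -m 2050
```

With portmap out of the loop, clients would have to name the ports explicitly at mount time, e.g. `mount -o port=2050,mountport=2050 server:/export /mnt`.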

Re: [Gluster-devel] Faster hashing for DHT

2010-01-06 Thread Jeff Darcy
On 01/05/2010 07:56 PM, Martin Fick wrote: > Hmm, if it were collision resistant, wouldn't that mean that you would need > one server for each file you want to store? I suspect you want many > collisions, just a good even distribution of those collisions, "Collision resistance" in this context…

Re: [Gluster-devel] Faster hashing for DHT

2010-01-06 Thread Joe Landman
Jeff Darcy wrote: On 01/05/2010 07:56 PM, Martin Fick wrote: Hmm, if it were collision resistant, wouldn't that mean that you would need one server for each file you want to store? I suspect you want many collisions, just a good even distribution of those collisions, "Collision resistance"…
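The point both replies circle around is that a placement hash needs an even spread of collisions across servers, not cryptographic collision resistance. A rough illustration of what "even distribution" means for DHT-style placement — md5sum is only a stand-in for the DHT hash, and the four "bricks" are hypothetical:

```shell
# Hash 1000 synthetic file names and bucket them onto 4 "servers".
# A good placement hash puts roughly 250 names in each bucket; that even
# spread, not collision resistance, is what matters here.
for i in $(seq 1 1000); do
    h=$(printf 'file%d' "$i" | md5sum | cut -c1-8)   # first 32 bits of the digest
    echo $(( 0x$h % 4 ))                             # bucket = hash mod server-count
done | sort | uniq -c
```

Running this shows four counts of similar size; a hash with poor distribution would pile most names into one or two buckets even though every individual hash is "collision free".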

Re: [Gluster-devel] Possible Performance Tweak for unfsd Users

2010-01-06 Thread Shehjar Tikoo
Gordan Bobic wrote: I can't figure out why this might be the case, but it would appear that when unfsd is bound to a custom port and not registered with portmap, the performance is massively improved. I changed my init.d/unfsd script as follows, in the start option: - /usr/sbin/unfsd -i ${pidfi…

Re: [Gluster-devel] Possible Performance Tweak for unfsd Users

2010-01-06 Thread Gordan Bobic
Shehjar Tikoo wrote: Gordan Bobic wrote: I can't figure out why this might be the case, but it would appear that when unfsd is bound to a custom port and not registered with portmap, the performance is massively improved. I changed my init.d/unfsd script as follows, in the start option: - /usr…

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Gordan Bobic
Shehjar Tikoo wrote: - "Gordan Bobic" wrote: I'm not sure if this is the right place to ask, but Google has failed me and it is rather related to the upcoming Gluster NFS export feature. What I'm interested in knowing is whether it will be possible to run Gluster NFS export for glfs moun…

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Gordan Bobic
Martin Fick wrote: --- On Wed, 1/6/10, Gordan Bobic wrote: With native NFS there'll be no need to first mount a glusterFS FUSE based volume and then export it as NFS. The way it has been developed is that any glusterfs volume in the volfile can be exported using NFS by adding an NFS volu…

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Martin Fick
--- On Wed, 1/6/10, Gordan Bobic wrote: > > With native NFS there'll be no need to first mount a > glusterFS > > FUSE based volume and then export it as NFS. The way > it has been developed is that > > any glusterfs volume in the volfile can be exported > using NFS by adding > > an NFS volume ove…

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Gordan Bobic
Gordan Bobic wrote: With native NFS there'll be no need to first mount a glusterFS FUSE based volume and then export it as NFS. The way it has been developed is that any glusterfs volume in the volfile can be exported using NFS by adding an NFS volume over it in the volfile. This is something…

[Gluster-devel] glusterfs process and deleted file handles

2010-01-06 Thread Gordan Bobic
I'm seeing something peculiar. lsof is showing glusterfs as having deleted file handles open in the backing store. That's fine, those file handles are open by the process running on top of it. But the process running on top of glusterfs isn't showing the same file handle as deleted. That seems…
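The "deleted" state lsof reports can be reproduced and inspected directly under /proc on Linux, which is useful for checking which side of the stack actually holds the stale handle. A minimal sketch — `tail -f` is just a convenient long-lived reader, and the temp file stands in for a file in the backing store:

```shell
# Reproduce an open-but-deleted file handle and inspect it via /proc.
tmp=$(mktemp)
tail -f "$tmp" & pid=$!        # a process that keeps the file open
sleep 1                        # give tail time to open it
rm "$tmp"                      # unlink it; the open fd survives
ls -l /proc/$pid/fd | grep '(deleted)'   # kernel tags the stale link target
kill $pid
```

The same `ls -l /proc/<pid>/fd` check can be run against both the glusterfs process and the application on top of it to see whose fd table the "(deleted)" entry really lives in.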

Re: [Gluster-devel] Faster hashing for DHT

2010-01-06 Thread Anand Avati
On Tue, Jan 5, 2010 at 10:42 PM, Jeff Darcy wrote: > While looking at the DHT code, I noticed that it's using a 10-round > Davies-Meyer construction to generate the hashes used for file > placement. A little surprised by this, I ran it by a couple of friends > who are experts in both cryptograph…

Re: [Gluster-devel] Faster hashing for DHT

2010-01-06 Thread Anand Avati
>> I note that Hsieh's SuperFastHash is already implemented in >> GlusterFS and is used for other purposes.  It's about 3x as fast as the >> DM hash, and has better collision resistance as well.  MurmurHash >> (http://murmurhash.googlepages.com/) is even faster and more collision >> resistant.  For…
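The "3x as fast" claim quoted above comes from benchmarking the hashes in-process; neither SuperFastHash nor MurmurHash ships as a command-line tool, so a serious comparison belongs in C. Purely to show the shape of such a measurement, here is the same idea with two stock digest commands as stand-ins:

```shell
# Rough throughput comparison of two digest commands over identical input.
# md5sum/sha1sum are only stand-ins for illustration -- the thread is about
# in-process hashes (Davies-Meyer, SuperFastHash, MurmurHash), which you
# would time inside a small C harness instead.
data=$(mktemp)
head -c 10000000 /dev/zero > "$data"   # 10 MB of zeroes as test input
time md5sum "$data" > /dev/null
time sha1sum "$data" > /dev/null
rm "$data"
```

For short keys like file names, per-call overhead dominates, so a realistic harness would hash millions of short strings rather than one large buffer.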

Re: [Gluster-devel] glusterfs process and deleted file handles

2010-01-06 Thread Anand Avati
On Thu, Jan 7, 2010 at 4:51 AM, Gordan Bobic wrote: > I'm seeing something peculiar. lsof is showing glusterfs as having deleted > file handles open in the backing store. That's fine, those file handles are > open by the process running on top of it. But the process running on top of > glusterfs i…

Re: [Gluster-devel] Disconnections and Corruption Under High Load

2010-01-06 Thread Anand Avati
On Tue, Jan 5, 2010 at 5:31 PM, Gordan Bobic wrote: > I've noticed a very high incidence of the problem I reported a while back, > that manifests itself in open files getting corrupted on commit, possibly > during conditions that involve server disconnections due to timeouts (very > high disk load…

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Anand Avati
> So - I did a redneck test instead - dd 64MB of /dev/zero to a file on the > mounted partition. > > On writes, NFS gets 4.4MB/s, GlusterFS (server side AFR) gets 4.6MB/s. > Pretty even. > On reads GlusterFS gets 117MB/s, NFS gets 119MB/s (on the first read after > flushing the caches, after that i…
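The "redneck test" quoted above is easy to reproduce. A sketch under assumptions — the mount point is hypothetical (set `MNT` to the mounted GlusterFS or NFS path), and the cache flush before the read leg needs root but keeps the read numbers honest, as the "first read after flushing the caches" remark implies:

```shell
# Sequential write then read of a 64 MB file, as described in the thread.
# MNT is a hypothetical mount point; it defaults to /tmp for a dry run.
mnt=${MNT:-/tmp}
dd if=/dev/zero of="$mnt/ddtest" bs=1M count=64 conv=fsync    # write leg (fsync so the figure includes commit)
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true         # flush page cache (root only)
dd if="$mnt/ddtest" of=/dev/null bs=1M                        # read leg
rm "$mnt/ddtest"
```

Without the cache drop, the second and later reads come from RAM and measure the page cache rather than the filesystem, which is why the quoted numbers distinguish the first read from the rest.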

Re: [Gluster-devel] Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.)

2010-01-06 Thread Anand Avati
> [2010-01-06 21:35:54] N [server-protocol.c:7065:mop_setvolume] server-home: > accepted client from 10.2.3.1:1023 > [2010-01-06 21:35:54] N [server-protocol.c:7065:mop_setvolume] server-home: > accepted client from 10.2.3.1:1022 > pending frames: > frame : type(1) op(LOOKUP) > frame : type(1) op(L…