Re: [Gluster-users] Gluster 3.3.0 and VMware ESXi 5

2012-06-07 Thread Atha Kouroussis
Hi Fernando, 
thanks for the reply. I'm seeing exactly the same behavior. I'm wondering if it 
somehow has to do with locking. I read here 
(http://community.gluster.org/q/can-not-mount-nfs-share-without-nolock-option/) 
that NFS locking was not implemented in 3.2.x but is in 3.3. I tested 
3.2.x with ESXi a few months ago and it seemed to work fine, but the lack of 
granular locking made it a no-go back then.
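On 3.2.x, the common workaround was to mount with client-side locking disabled. A minimal sketch of such a mount from a Linux client (server and volume names here are placeholders, not our actual setup):

```shell
# Mount a Gluster volume over NFSv3 with client-side locking disabled
# (the 3.2.x-era workaround); "gluster1" and "vmvol" are placeholder names.
mount -t nfs -o vers=3,proto=tcp,nolock gluster1:/vmvol /mnt/vmvol
```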

Would anybody care to chime in with suggestions? Is there a way to revert NFS to 
the 3.2.x behavior for testing?
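If it is the new NLM support that trips up ESXi, one thing worth trying (assuming your 3.3 build exposes the nfs.nlm volume option; `gluster volume set help` should list it) is turning it off, which should approximate the 3.2.x behavior:

```shell
# Disable the Network Lock Manager on the volume to approximate
# 3.2.x NFS behavior; "vmvol" is a placeholder volume name.
gluster volume set vmvol nfs.nlm off
# Check the volume's current settings.
gluster volume info vmvol
```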

Cheers,
Atha

On Thursday, June 7, 2012 at 11:52 AM, Fernando Frediani (Qube) wrote:

> Hi Atha,
> 
> I have a very similar setup and behaviour here: two bricks with replication. 
> I can mount the volume via NFS and deploy a machine there, but when I try to 
> power it on it simply doesn't work, giving a message saying that it couldn't 
> find some files.
> 
> I wonder if anyone has actually got this working with VMware ESXi and can 
> share their setup with us. Here I have two CentOS 6.2 servers and Gluster 3.3.0.
> 
> Fernando
> 
> -----Original Message-----
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Atha Kouroussis
> Sent: 07 June 2012 15:29
> To: gluster-users@gluster.org
> Subject: [Gluster-users] Gluster 3.3.0 and VMware ESXi 5
> 
> Hi everybody,
> we are testing Gluster 3.3 as an alternative to our current Nexenta-based 
> storage. With the introduction of granular locking, Gluster seems like a 
> viable alternative for VM storage.
> 
> Regrettably, we cannot get it to work even for the most rudimentary tests. We 
> have a two-brick setup with two ESXi 5 servers. We created both distributed 
> and replicated volumes. We can mount the volumes via NFS on the ESXi servers 
> without any issues, but that is as far as we can get.
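> As a point of reference, adding a Gluster NFS export as a datastore on ESXi 5 
> went roughly like this (host name, export path, and datastore name below are 
> placeholders, not our actual values):
> 
> ```shell
> # Add the Gluster NFS export as an ESXi 5 datastore.
> # "gluster1", "/vmvol", and "gluster-vmvol" are placeholder names.
> esxcli storage nfs add --host=gluster1 --share=/vmvol --volume-name=gluster-vmvol
> # Confirm the datastore is mounted.
> esxcli storage nfs list
> ```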
> 
> When we try to migrate a VM to the gluster backed datastore there is no 
> activity on the bricks and eventually the operation times out on the ESXi 
> side. The nfs.log shows messages like these (distributed volume):
> 
> [2012-06-07 00:00:16.992649] E [nfs3.c:3551:nfs3_rmdir_resume] 0-nfs-nfsv3: 
> Unable to resolve FH: (192.168.11.11:646) vmvol : 
> 7d25cb9a-b9c8-440d-bbd8-973694ccad17
> [2012-06-07 00:00:17.027559] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs: 
> 3bb48d69: /TEST => -1 (Directory not empty)
> [2012-06-07 00:00:17.066276] W [nfs3.c:3525:nfs3svc_rmdir_cbk] 0-nfs: 
> 3bb48d90: /TEST => -1 (Directory not empty)
> [2012-06-07 00:00:17.097118] E [nfs3.c:3551:nfs3_rmdir_resume] 0-nfs-nfsv3: 
> Unable to resolve FH: (192.168.11.11:646) vmvol : 
> ----0001
> 
> 
> When the volume is mounted on the ESXi servers, we get messages like these in 
> nfs.log:
> 
> [2012-06-06 23:57:34.697460] W [socket.c:195:__socket_rwv] 
> 0-socket.nfs-server: readv failed (Connection reset by peer)
> 
> 
> The same volumes mounted via NFS on a linux box work fine and we did a couple 
> of benchmarks with bonnie++ with very promising results.
> Curiously, if we SSH into the ESXi boxes and go to the volume's mount point, 
> we can see its contents and write to it.
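> For what it's worth, the bonnie++ runs were along these lines (the mount 
> point, size, and user are placeholders; a size of roughly twice the client's 
> RAM avoids measuring the page cache):
> 
> ```shell
> # Benchmark the NFS-mounted Gluster volume from a Linux client.
> # "/mnt/vmvol" is a placeholder mount point; -s should exceed client RAM.
> bonnie++ -d /mnt/vmvol -s 8G -u nobody
> ```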
> 
> Any clues as to what might be going on? Thanks in advance.
> 
> Cheers,
> Atha
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




