Re: [Gluster-users] Is there a difference when Nfs-Ganesha is unavailable

2017-05-10 Thread ML Wong
pointers on where to look would be great. Lately, I also found that the choice of NFS client played a significant role in my testing, unfortunately... On Tue, May 9, 2017 at 11:21 PM Soumya Koduri <skod...@redhat.com> wrote: > > > On 05/10/2017 04:18 AM, ML Wong wrote

[Gluster-users] Is there a difference when Nfs-Ganesha is unavailable

2017-05-09 Thread ML Wong
While I am troubleshooting the failover of Nfs-Ganesha, the failover is always successful when I shut down the Nfs-Ganesha service while the OS is running. However, it always fails when I do either a shutdown -r or a power-reset. During the failure, the NFS client just hangs. Like you could
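
A minimal sketch of the two failover tests being compared here, assuming the Ganesha service unit is named nfs-ganesha and the cluster is managed with pcs (both are assumptions about this setup):

    # Case 1: stop only the Ganesha service while the OS keeps running
    systemctl stop nfs-ganesha

    # Case 2: take the whole node down (reboot or hard power-reset)
    shutdown -r now

    # On a surviving node, watch the virtual IP / ganesha resources fail over
    pcs status resources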

Re: [Gluster-users] Strange - Missing hostname-trigger_ip-1 resources

2017-02-03 Thread ML Wong
at 11:42 PM, Soumya Koduri <skod...@redhat.com> wrote: > Hi, > > On 02/03/2017 07:52 AM, ML Wong wrote: > >> Hello All, >> Any pointers will be very much appreciated. Thanks in advance! >> >> Environment: >> Running CentOS 7.2.1511 >> Gluster: 3.7.16

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-14 Thread ML Wong
Though remove-brick is not a usual operation we would perform on a Gluster volume, it has consistently failed, ending in a corrupted Gluster volume, after sharding has been turned on. As for bug 1387878, it is very similar to what I had encountered in the ESXi world. Add-brick would run successfully, but
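
As a side note, a quick sketch for confirming the sharding settings in effect on the affected volume ("myvol" is a placeholder volume name):

    # Is sharding enabled, and with which shard size?
    gluster volume get myvol features.shard
    gluster volume get myvol features.shard-block-size

    # Full volume layout and options
    gluster volume info myvol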

[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-11 Thread ML Wong
Has anyone encountered this behavior? Running 3.7.16 from centos-gluster37, on CentOS 7.2 with NFS-Ganesha 2.3.0. VMs are running fine without problems with sharding on. However, when I do either an "add-brick" or a "remove-brick start force", VM files will then be corrupted, and the VM will not
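
A rough sketch of the sequence being described, using placeholder hosts and brick paths; not something to run against production VM storage:

    # Expand the volume by one replica pair, then rebalance
    gluster volume add-brick myvol host5:/bricks/b5 host6:/bricks/b6
    gluster volume rebalance myvol start

    # Or shrink it: migrate data off a replica pair, then commit
    gluster volume remove-brick myvol host3:/bricks/b3 host4:/bricks/b4 start
    gluster volume remove-brick myvol host3:/bricks/b3 host4:/bricks/b4 status
    gluster volume remove-brick myvol host3:/bricks/b3 host4:/bricks/b4 commit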

[Gluster-users] Understandings of ganesha-ha.sh

2016-11-04 Thread ML Wong
I would like to ask for some recommendations here. 1) For /usr/libexec/ganesha/ganesha-ha.sh: we have been taking advantage of pacemaker+corosync for some other services; however, we always run into the issue of losing the other resources we set up in the cluster when we run ganesha-ha.sh
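
One hedged precaution (assuming pcs/pacemaker on CentOS 7): snapshot the existing cluster configuration before invoking ganesha-ha.sh, so the other resources can at least be compared or re-applied afterwards; the file names below are arbitrary:

    # Record the current cluster state and resource definitions
    pcs status > /root/pcs-status.before
    pcs config > /root/pcs-config.before
    cibadmin --query > /root/cib-backup.xml

    # After running ganesha-ha.sh, compare what changed
    pcs config > /root/pcs-config.after
    diff /root/pcs-config.before /root/pcs-config.after
    # cibadmin --replace --xml-file /root/cib-backup.xml can restore the old CIB if needed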

Re: [Gluster-users] File Size and Brick Size

2016-09-30 Thread ML Wong
ion (`gluster volume info`)? >> >> -Krutika >> >> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N <ravishan...@redhat.com> >> wrote: >> >>> On 09/28/2016 12:16 AM, ML Wong wrote: >>> >>> Hello Ravishankar, >>> Thanks for

[Gluster-users] File Size and Brick Size

2016-09-26 Thread ML Wong
Has anyone on the list tried copying a file that is bigger than the individual brick/replica size? Test scenario: Distributed-Replicated volume, 2GB size, 2x2 = 4 bricks, 2 replicas, each replica 1GB. When I tried to copy a file to this volume, via both fuse and nfs mounts, I got an I/O error.
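
A sketch of a comparable test setup with placeholder hostnames and brick paths, where each brick sits on a roughly 1GB filesystem:

    # 2x2 distributed-replicated volume, four ~1GB bricks
    gluster volume create testvol replica 2 \
        n1:/bricks/b1 n2:/bricks/b2 n3:/bricks/b3 n4:/bricks/b4
    gluster volume start testvol

    # FUSE mount and write a file larger than any single brick (~1.5GB)
    mount -t glusterfs n1:/testvol /mnt/testvol
    dd if=/dev/zero of=/mnt/testvol/big.img bs=1M count=1536

Without sharding or striping, DHT places the whole file on a single replica pair, so a copy larger than that pair's free space is expected to fail with ENOSPC/I/O errors.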

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-20 Thread ML Wong
Environment: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 On Mon, Mar 14, 2016 at 2:05 AM, Soumya Koduri <skod...@redhat.com> wrote: > Hi, > > > On 03/14/2016 04:06 AM, ML Wong wrote: > >> Run

[Gluster-users] nfs-ganesha volume null errors

2016-03-13 Thread ML Wong
Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64. 1) Ensured connectivity between the gluster nodes using ping. 2) Disabled NetworkManager (Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service;
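
For reference, a sketch of those prerequisite steps on CentOS 7 (node names are placeholders):

    # 1) Verify connectivity between the gluster nodes
    ping -c 3 node2 && ping -c 3 node3

    # 2) Disable NetworkManager and use the classic network service instead
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    systemctl enable network
    systemctl start network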