pointers on where to look would be great.
Lately, I also found that the choice of NFS client played a significant role in
my testing, unfortunately...
On Tue, May 9, 2017 at 11:21 PM Soumya Koduri <skod...@redhat.com> wrote:
>
>
> On 05/10/2017 04:18 AM, ML Wong wrote:
While troubleshooting NFS-Ganesha failover, the failover always succeeds
when I shut down the nfs-ganesha service while the OS is running. However,
it always fails when I do either a shutdown -r or a power reset.
During the failure, the NFS client was just hung. Like you could [...]
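When the client hangs like this, one quick way to probe the mount without wedging another shell is to bound the check with a timeout. A minimal sketch (the path passed in is whatever mount point you used on the client; /tmp below is only a local stand-in for demonstration):

```shell
# Probe whether a mount is responsive without hanging the shell:
# `timeout` kills the stat(1) call if the server is still mid-failover.
probe_mount() {
  local mnt="$1"
  if timeout 5 stat -f "$mnt" >/dev/null 2>&1; then
    echo "responsive"
  else
    echo "hung-or-down"
  fi
}

probe_mount /tmp   # a local path, so this prints "responsive"
```

Running this in a loop during the power-reset test shows how long the client actually stays hung, which helps separate a grace-period delay from a genuinely stuck mount.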
[...] at 11:42 PM, Soumya Koduri <skod...@redhat.com> wrote:
> Hi,
>
> On 02/03/2017 07:52 AM, ML Wong wrote:
>
>> Hello All,
>> Any pointers will be very-much appreciated. Thanks in advance!
>>
>> Environment:
>> Running CentOS 7.2.1511
>> Gluster: 3.7.16
Though remove-brick is not a usual operation for a Gluster volume, it has
consistently failed for me, ending in a corrupted Gluster volume, after
sharding was turned on. Bug 1387878 is very similar to what I encountered
in the ESXi world. Add-brick would run successfully, but [...]
Has anyone encountered this behavior?
Running 3.7.16 from centos-gluster37, on CentOS 7.2 with NFS-Ganesha 2.3.0.
VMs are running fine without problems with sharding on. However, when I do
either an "add-brick" or a "remove-brick start force", VM files then become
corrupted, and the VM will not [...]
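For reference, the sequence I follow is sketched below, with placeholder volume and brick names (this is only the procedure, not a claim about why the corruption happens): start the removal, wait for migration to finish, and only then commit.

```shell
# Start migrating data off the bricks being removed (names are placeholders).
gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 start

# Poll until the migration status shows "completed" for the affected bricks.
gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 status

# Commit only after completion; committing mid-migration can lose data.
gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 commit
```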
I would like to ask for some recommendations here.
1) For /usr/libexec/ganesha/ganesha-ha.sh: we have been taking advantage of
pacemaker+corosync for some other services, but we always run into the issue
of losing the other resources we set up in the cluster when we run
ganesha-ha.sh.
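One defensive step that might help (a sketch; file paths are arbitrary): dump the full cluster configuration (CIB) with pcs before running ganesha-ha.sh, so that if the other resources disappear they can at least be diffed against the backup and re-added.

```shell
# Snapshot the current cluster configuration before the script touches it.
pcs cluster cib /root/cib-before-ganesha.xml

# ... run ganesha-ha.sh (or `gluster nfs-ganesha enable`) here ...

# Dump again afterwards and diff to see exactly which resources were lost.
pcs cluster cib /tmp/cib-after-ganesha.xml
diff -u /root/cib-before-ganesha.xml /tmp/cib-after-ganesha.xml
```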
[...] configuration (`gluster volume info`)?
>>
>> -Krutika
>>
>> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N <ravishan...@redhat.com>
>> wrote:
>>
>>> On 09/28/2016 12:16 AM, ML Wong wrote:
>>>
>>> Hello Ravishankar,
>>> Thanks for [...]
Has anyone on the list tried copying a file that is bigger than the
individual brick/replica size?
Test scenario:
Distributed-Replicated volume, 2GB total, 2x2 = 4 bricks, 2 replicas
Each replica has 1GB
When I try to copy a file to this volume, over either a FUSE or an NFS
mount, I get an I/O error.
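As far as I understand DHT, a plain distributed(-replicated) volume never splits a file: each file lands whole on one replica pair, so a single file is capped by that pair's free space and a larger copy fails with an I/O error. Sharding is the usual way around this; a sketch with a placeholder volume name (note that sharding only applies to files created after it is enabled):

```shell
# Split new files into fixed-size shards that DHT can spread across all
# replica pairs, so one file is no longer capped by a single brick's size.
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 64MB
```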
Environment: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8
(glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
On Mon, Mar 14, 2016 at 2:05 AM, Soumya Koduri <skod...@redhat.com> wrote:
> Hi,
>
>
> On 03/14/2016 04:06 AM, ML Wong wrote:
>
>> Run [...]
Running CentOS Linux release 7.2.1511, glusterfs 3.7.8
(glusterfs-server-3.7.8-2.el7.x86_64),
nfs-ganesha-gluster-2.3.0-1.el7.x86_64
1) Ensured connectivity between the Gluster nodes using ping
2) Disabled NetworkManager (Loaded: loaded
(/usr/lib/systemd/system/NetworkManager.service; [...]
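Those two checks can be scripted so they run identically on every node; a sketch with placeholder hostnames:

```shell
# 1) Verify every peer answers ping; report the ones that do not.
for node in gnode1 gnode2 gnode3; do
    ping -c 2 -W 2 "$node" >/dev/null 2>&1 || echo "unreachable: $node"
done

# 2) Confirm NetworkManager is stopped now and stays disabled across reboots.
systemctl is-active NetworkManager
systemctl is-enabled NetworkManager
```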