Could you disable stat-prefetch on the volume and create another vm off
that template and see if it works?
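For reference, a sketch of the command (VOLNAME here is a placeholder for the actual data volume, which isn't named in this thread):

gluster volume set VOLNAME performance.stat-prefetch off   # VOLNAME = your data volume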
-Krutika
On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Any chance of a backup you could do bit compare with?
>
>
>
> Sent from my Windows 10
Yes! That seems to work.
I executed this command: gluster volume set advdemo network.ping-timeout 41, and
then rebooted the host.
Gluster is now starting correctly.
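(For anyone following along: the setting can be double-checked afterwards with something like the following, advdemo being the volume from this thread.)

gluster volume get advdemo network.ping-timeout   # should report the new value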
Now I'm going to upgrade the other servers in my cluster.
Thanks again
From: Atin Mukherjee
Any chance of a backup you could do bit compare with?
Sent from my Windows 10 phone
From: Mahdi Adnan
Sent: Friday, 6 October 2017 12:26 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster 3.8.13 data corruption
Hi,
We're running Gluster 3.8.13 replica 2 (SSDs); it's used as a storage domain for
oVirt.
Today, we found an issue with one of the VM templates: after deploying a VM
from this template, it will not boot; it gets stuck at mounting the root partition.
We've been using this template for months now and we
Hello Atin,
Please find below the requested information:
[root@dvihcasc0r ~]# cat /var/lib/glusterd/vols/advdemo/bricks/*
hostname=dvihcasc0r
path=/opt/glusterfs/advdemo
real_path=/opt/glusterfs/advdemo
listen-port=49152
rdma.listen-port=0
decommissioned=0
brick-id=advdemo-client-0
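Note there is no uuid= line in the files above. Assuming the field is stored under that key (see the root-cause explanation further down), a quick check for brick files that are missing it could look like:

grep -L '^uuid=' /var/lib/glusterd/vols/advdemo/bricks/*   # lists brick files with no uuid recorded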
Ouch!
I use a unified UID/GID process. I personally use FreeIPA. It can also be
done with just LDAP or (not recommended for security reasons) NIS+.
Barring those, a well-disciplined manual process will work by copying the
passwd, group, shadow, and gshadow files around to all systems. Create new
users
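As a minimal sketch of that manual approach (hypothetical names and IDs; the point is to run the same commands with the same numeric IDs on every host):

groupadd -g 5000 devteam          # same GID on every host
useradd -u 5001 -g 5000 alice     # same UID on every host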
I have a setup with multiple hosts, each of which is administered
separately, so there is no unified uid/gid for the users.
When mounting a GlusterFS volume, a file owned by user1 on host1 might
become owned by user2 on host2.
I was looking into POSIX ACL or bindfs, but that won't help me much.
Hello,
I have a gluster volume set up using geo-replication on two slaves;
however, I'm seeing inconsistent status output on the slave nodes.
Here is the status shown by gluster volume geo-replication status on
each node.
[root@foo-gluster-srv3 ~]# gluster volume geo-replication status
MASTER
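For completeness, the per-session form of the command can be less ambiguous than the bare status above (MASTERVOL, SLAVEHOST, and SLAVEVOL are placeholders):

gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail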
On 4 October 2017 at 23:34, WK wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue is ONLY involved if you
> are doing rebalancing with sharding enabled?
>
> So if I am not doing rebalancing I should be fine?
>
That is correct (a quick check for whether sharding is enabled is sketched below).
> -bill
>
>
>
> On
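A quick way to check whether sharding is enabled on a given volume (VOLNAME is a placeholder):

gluster volume get VOLNAME features.shard   # 'on' means the rebalance caveat above applies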
Hello,
It seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3
configuration. I upgraded the first server and then launched a reboot.
Gluster is not starting. It seems that Gluster starts before the network layer is up.
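If glusterd really is racing the network at boot, one common workaround (a sketch, not something suggested in this thread) is a systemd drop-in that orders glusterd after network-online.target:

mkdir -p /etc/systemd/system/glusterd.service.d
printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' \
  > /etc/systemd/system/glusterd.service.d/network-online.conf
systemctl daemon-reload   # takes effect from the next boot, provided network-online.target is actually reached (e.g. NetworkManager-wait-online enabled)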
Some logs here:
Thanks
[2017-10-04 15:33:00.506396] I [MSGID:
So I have the root cause. Basically, as part of the patch, we write the
brickinfo->uuid into the brickinfo file only when there is a change in the
volume. As per the brickinfo files you shared, the uuid was not saved, as
there was no new change in the volume, and hence the uuid was always NULL in
the
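Given that explanation, any volume-level configuration change should cause the uuid to be written out; for example, re-applying an option already touched earlier in this thread (advdemo being the affected volume):

gluster volume set advdemo network.ping-timeout 41   # any supported option change should dirty the volume config and persist the uuid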