Re: [Gluster-users] [ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-26 Thread Krutika Dhananjay
Could you enable strict-o-direct and disable remote-dio on the src volume as well, restart the VMs on "old" and retry migration? # gluster volume set <VOLNAME> performance.strict-o-direct on # gluster volume set <VOLNAME> network.remote-dio off -Krutika On Tue, Mar 26, 2019 at 10:32 PM Sander Hoentjen wrote: >
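
For completeness, the change can be verified with gluster volume get, and the VMs have to be restarted presumably so that QEMU reopens the image files and picks up the new O_DIRECT behaviour. A minimal sketch, assuming the source volume is named data (hypothetical name):

  gluster volume set data performance.strict-o-direct on
  gluster volume set data network.remote-dio off
  # confirm both options took effect
  gluster volume get data performance.strict-o-direct
  gluster volume get data network.remote-dio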

Re: [Gluster-users] Prioritise local bricks for IO?

2019-03-26 Thread Vlad Kopylov
I don't remember if it still works, but there is NUFA: https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md v On Tue, Mar 26, 2019 at 7:27 AM Nux! wrote: > Hello, > > I'm trying to set up a distributed backup storage (no replicas), but I'd > like to prioritise the local bricks for
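
If NUFA is still wired up in current releases, it is switched on as an ordinary volume option (the spec above describes the behaviour). A minimal sketch, assuming a distribute-only volume named backups (hypothetical name):

  gluster volume set backups cluster.nufa on
  # check what the option is currently set to
  gluster volume get backups cluster.nufa

Note that NUFA only influences where new files are created (it prefers a brick local to the client doing the create); existing files are still read from whichever brick they already live on.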

[Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-26 Thread Raghavendra Gowdappa
All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from the bricks/server. This helps Glusterfs avoid a stale lock problem later (e.g., if the application unlocks while the connection was still down). However, this means

Re: [Gluster-users] recovery from reboot time?

2019-03-26 Thread Alvin Starr
I tracked down the 2 gfids and it looks like they were "partly?" configured. I copied the data off the gluster volume they existed on and then removed the files on the server and recreated them on the client. Things seem to be sane again, but at this point I am not amazingly confident in the
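
As an aside, when heal info only reports raw GFIDs, they can usually be mapped back to a path on the brick by following the hard link kept under the brick's .glusterfs directory. A sketch, assuming /data/brick1 is the brick path and aabbccdd-1122-3344-5566-77889900aabb is the GFID (both hypothetical):

  # the GFID file sits under .glusterfs/<first two hex chars>/<next two hex chars>/
  stat /data/brick1/.glusterfs/aa/bb/aabbccdd-1122-3344-5566-77889900aabb
  # for regular files, find the real path sharing the same inode
  find /data/brick1 -samefile /data/brick1/.glusterfs/aa/bb/aabbccdd-1122-3344-5566-77889900aabb -not -path '*/.glusterfs/*'

(For directories the .glusterfs entry is a symlink rather than a hard link, so it can simply be followed with readlink.)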

Re: [Gluster-users] [ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-26 Thread Sander Hoentjen
On 26-03-19 14:23, Sahina Bose wrote: > +Krutika Dhananjay and gluster ml > > On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote: >> Hello, >> >> tl;dr We have disk corruption when doing live storage migration on oVirt >> 4.2 with gluster 3.12.15. Any idea why? >> >> We have a 3-node oVirt

Re: [Gluster-users] [Gluster-Maintainers] Announcing Gluster release 5.5

2019-03-26 Thread Niels de Vos
On Tue, Mar 26, 2019 at 11:26:00AM -0500, Darrell Budic wrote: > Heads up for the Centos storage maintainers, I’ve tested 5.5 on my dev > cluster and it behaves well. It also resolved rolling upgrade issues in a > hyperconverged ovirt cluster for me, so I recommend moving it out of testing.
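
For anyone who wants to try the build while it is still in testing, the CentOS Storage SIG normally ships it through a disabled *-test repository. A sketch, assuming CentOS 7 and the usual SIG repo naming (the centos-gluster5-test repo id is an assumption; check the repo file that centos-release-gluster5 installs):

  # pull in the Storage SIG repo definitions for the Gluster 5 stream
  yum install -y centos-release-gluster5
  # install from the testing repo until 5.5 is promoted to release
  yum --enablerepo=centos-gluster5-test install -y glusterfs-server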

Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-26 Thread Darrell Budic
Heads up for the CentOS storage maintainers, I’ve tested 5.5 on my dev cluster and it behaves well. It also resolved rolling upgrade issues in a hyperconverged oVirt cluster for me, so I recommend moving it out of testing. -Darrell > On Mar 21, 2019, at 6:06 AM, Shyam Ranganathan wrote: >

Re: [Gluster-users] recovery from reboot time?

2019-03-26 Thread Sankarshan Mukhopadhyay
On Tue, Mar 26, 2019 at 6:10 PM Alvin Starr wrote: > > After almost a week of doing nothing the brick failed and we were able to > stop and restart glusterd and then could start a manual heal. > > It was interesting when the heal started the time to completion was just > about 21 days but as it

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Aravinda
Please check the error message in the gsyncd.log file under /var/log/glusterfs/geo-replication/ On Tue, 2019-03-26 at 19:44 +0530, Maurya M wrote: > Hi Arvind, > Have patched my setup with your fix: re-run the setup, but this time > getting a different error where it failed to commit the ssh-port on > my
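
For reference, each geo-replication session gets its own directory under that path, and the session state can also be queried from the CLI on the master. A sketch, with gvol, slavehost and gslave as hypothetical master volume, slave host and slave volume names:

  # follow the worker log for the session (the session directory name varies by release)
  tail -f /var/log/glusterfs/geo-replication/<session-dir>/gsyncd.log
  # show per-brick session status from the master side
  gluster volume geo-replication gvol slavehost::gslave status detail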

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Maurya M
Hi Arvind, I have patched my setup with your fix and re-run the setup, but this time I am getting a different error where it failed to commit the ssh-port on my other 2 nodes on the master cluster, so I manually copied the "[vars] ssh-port = " entry into gsyncd.conf, and the status reported back is as shown
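
Rather than hand-editing gsyncd.conf on each node, the same setting can normally be pushed through the geo-replication config interface so it reaches every node in the master cluster. A sketch, assuming the session gvol -> slavehost::gslave (hypothetical names) and a non-standard SSH port of 2222; the exact option name may differ between releases, ssh-port matches the [vars] entry above:

  gluster volume geo-replication gvol slavehost::gslave config ssh-port 2222
  # read the value back to confirm it was committed
  gluster volume geo-replication gvol slavehost::gslave config ssh-port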

Re: [Gluster-users] [ovirt-users] VM disk corruption with LSM on Gluster

2019-03-26 Thread Sahina Bose
+Krutika Dhananjay and gluster ml On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote: > > Hello, > > tl;dr We have disk corruption when doing live storage migration on oVirt > 4.2 with gluster 3.12.15. Any idea why? > > We have a 3-node oVirt cluster that is both compute and gluster-storage.

Re: [Gluster-users] recovery from reboot time?

2019-03-26 Thread Alvin Starr
After almost a week of doing nothing the brick failed and we were able to stop and restart glusterd and then could start a manual heal. It was interesting: when the heal started, the time to completion was just about 21 days, but as it worked through the 30-some entries it got faster, to the
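
To track that kind of progress without guessing, the pending heal counts can be polled per brick on reasonably recent releases. A minimal sketch, assuming a volume named vol0 (hypothetical name):

  # number of entries still waiting to be healed, per brick
  gluster volume heal vol0 info summary
  # counts gathered by the self-heal daemon's crawls
  gluster volume heal vol0 statistics heal-count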

[Gluster-users] Prioritise local bricks for IO?

2019-03-26 Thread Nux!
Hello, I'm trying to set up a distributed backup storage (no replicas), but I'd like to prioritise the local bricks for any IO done on the volume. This will be a backup store, so in other words, I'd like the files to be written locally if there is space, so as to save the NICs for other traffic.

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Aravinda
I got a chance to investigate this issue further, identified an issue with the Geo-replication config set, and sent a patch to fix it. BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Patch: https://review.gluster.org/22418 On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote: > ran this