Could you enable strict-o-direct and disable remote-dio on the src volume
as well, restart the VMs on "old", and retry the migration?
# gluster volume set <volname> performance.strict-o-direct on
# gluster volume set <volname> network.remote-dio off
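(To confirm the options took effect, a quick check — a sketch, with <volname>
as a placeholder for the actual volume name:)
# gluster volume get <volname> performance.strict-o-direct
# gluster volume get <volname> network.remote-dio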
-Krutika
On Tue, Mar 26, 2019 at 10:32 PM Sander Hoentjen wrote:
>
I don't remember if it still works, but have a look at NUFA:
https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md
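(If it does still work, enabling it should be a one-liner along these lines —
a sketch; <volname> is a placeholder and the option name may differ between
versions:)
# gluster volume set <volname> cluster.nufa on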
-v
On Tue, Mar 26, 2019 at 7:27 AM Nux! wrote:
> Hello,
>
> I'm trying to set up a distributed backup storage (no replicas), but I'd
> like to prioritise the local bricks for a …
All,
Glusterfs cleans up POSIX locks held on an fd when the client/mount through
which those locks are held disconnects from the bricks/server. This helps
Glusterfs avoid running into a stale lock problem later (e.g., if the
application unlocks while the connection is still down). However, this
means the …
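(As an aside, the POSIX locks currently held on a volume can be inspected via
a statedump — a sketch, assuming the default dump directory, which may differ
per distribution:)
# gluster volume statedump <volname>
# grep -A2 posixlk /var/run/gluster/*.dump.*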
I tracked down the 2 GFIDs and it looks like they were "partly?" configured.
I copied the data off the gluster volume they existed on and then
removed the files on the server and recreated them on the client.
Things seem to be sane again, but at this point I am not amazingly
confident in the c…
On 26-03-19 14:23, Sahina Bose wrote:
> +Krutika Dhananjay and gluster ml
>
> On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>> Hello,
>>
>> tl;dr We have disk corruption when doing live storage migration on oVirt
>> 4.2 with gluster 3.12.15. Any idea why?
>>
>> We have a 3-node oVirt clus…
On Tue, Mar 26, 2019 at 11:26:00AM -0500, Darrell Budic wrote:
> Heads up for the CentOS storage maintainers, I’ve tested 5.5 on my dev
> cluster and it behaves well. It also resolved rolling upgrade issues in a
> hyperconverged oVirt cluster for me, so I recommend moving it out of testing.
Than…
Heads up for the CentOS storage maintainers, I’ve tested 5.5 on my dev cluster
and it behaves well. It also resolved rolling upgrade issues in a
hyperconverged oVirt cluster for me, so I recommend moving it out of testing.
-Darrell
> On Mar 21, 2019, at 6:06 AM, Shyam Ranganathan wrote:
>
>
On Tue, Mar 26, 2019 at 6:10 PM Alvin Starr wrote:
>
> After almost a week of doing nothing, the brick failed and we were able to
> stop and restart glusterd and then could start a manual heal.
>
> It was interesting: when the heal started, the time to completion was just
> about 21 days, but as it…
Please check the error message in the gsyncd.log file under
/var/log/glusterfs/geo-replication/
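(For example — a sketch; the session directory is derived from the master
volume, slave host and slave volume, so the exact path varies per setup:)
# tail -n 100 /var/log/glusterfs/geo-replication/<mastervol>_<slavehost>_<slavevol>/gsyncd.log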
On Tue, 2019-03-26 at 19:44 +0530, Maurya M wrote:
> Hi Arvind,
> I have patched my setup with your fix and re-run the setup, but this time
> I am getting a different error where it failed to commit the ssh-port on
> my ot…
Hi Arvind,
I have patched my setup with your fix and re-run the setup, but this time
I am getting a different error where it failed to commit the ssh-port on my
other 2 nodes on the master cluster, so I manually copied the following into
gsyncd.conf:
[vars]
ssh-port = 
and the status reported back is as shown be…
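(For what it's worth, the same setting can usually be applied from the CLI
instead of hand-editing gsyncd.conf — a sketch, with placeholder volume
names, host and port:)
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config ssh-port <port>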
+Krutika Dhananjay and gluster ml
On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen wrote:
>
> Hello,
>
> tl;dr We have disk corruption when doing live storage migration on oVirt
> 4.2 with gluster 3.12.15. Any idea why?
>
> We have a 3-node oVirt cluster that is both compute and gluster-storage.
>
After almost a week of doing nothing, the brick failed and we were able
to stop and restart glusterd and then could start a manual heal.
It was interesting: when the heal started, the time to completion was just
about 21 days, but as it worked through the 30-some entries it got
faster to the p…
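(For anyone following along, a manual heal is typically kicked off and
monitored with something like the following — <volname> is a placeholder:)
# systemctl restart glusterd
# gluster volume heal <volname> full
# gluster volume heal <volname> info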
Hello,
I'm trying to set up a distributed backup storage (no replicas), but I'd like
to prioritise the local bricks for any IO done on the volume.
This will be a backup store; in other words, I'd like the files to be written
locally if there is space, so as to spare the NICs for other traffic.
I got a chance to investigate this issue further and identified an issue
with the geo-replication config set, and sent a patch to fix it.
BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
Patch: https://review.gluster.org/22418
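(Once patched, the effective geo-replication config can be listed to verify
the change — a sketch with placeholder names:)
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config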
On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> ran this comm…