Hi David,
I hope you manage to recover the VM or most of the data. If you have multiple
disks in that VM (easily observable in the oVirt UI), you might need to repeat
that process for the rest of the disks.
Check the inode size (isize) with xfs_info, as the default used to be 256, but
I have noticed t
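A minimal sketch of the isize check being suggested; the brick mount point is an assumption, so substitute your own path:

```shell
# Inspect the XFS inode size on a mounted brick (path is a placeholder).
xfs_info /gluster_bricks/data | grep isize
# Gluster bricks are normally formatted with isize=512; a plain
# 'mkfs.xfs' on older systems defaulted to isize=256.
```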
Thank you for all the responses.
Following Strahil's instructions, I *think* that I was able to reconstruct the
disk image. I'm just waiting for that image to finish downloading onto my local
machine, at which point I'll try to import into VirtualBox or something.
Fingers crossed!
Worst case sc
If you manage to export the disk image via the GUI, the result should be a
qcow2 format file, which you can mount/attach to anything Linux (well, if the
VM was Linux... it didn't say)
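If the export does produce a qcow2 file, one common way to mount it on a Linux machine is via qemu-nbd; this is a sketch, and the image filename and partition number are assumptions:

```shell
# Load the network block device module, allowing partitions.
modprobe nbd max_part=8
# Attach the exported image (filename is a placeholder).
qemu-nbd --connect=/dev/nbd0 exported-disk.qcow2
# Inspect the partition table, then mount read-only to be safe.
fdisk -l /dev/nbd0
mount -o ro /dev/nbd0p1 /mnt
# ...copy the data out of /mnt, then clean up:
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```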
But it's perhaps easier to simply try to attach the disk of the failed VM as a
secondary to a live VM to recove
First off, I have very little hope that you'll be able to recover your data
working at the gluster level...
And then there is a lot of information missing between the lines: I guess you
are using a 3-node HCI setup and were adding new disks (/dev/sdb) on all three
nodes and trying to move the glusterfs
*should be 2
On Thu, Aug 5, 2021 at 7:42, Strahil Nikolov wrote:
when you use 'remove-brick replica 1', you need to specify the removed bricks,
which should be 1 (data brick and arbiter). Something is missing in your
description.
Best Regards, Strahil Nikolov
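A hedged sketch of the syntax being described, shrinking a replicated volume down to replica 1 while listing the bricks being removed; the volume name, hostnames, and brick paths are all hypothetical:

```shell
# Remove the bricks of two nodes from a replicated volume,
# leaving a single copy (all names below are placeholders).
gluster volume remove-brick data replica 1 \
    node2:/gluster_bricks/data/data \
    node3:/gluster_bricks/data/data \
    force
```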
On Thu, Aug 5, 2021 at 7:3
On Thu, Aug 5, 2021 at 7:33, Strahil Nikolov via Users
wrote:
First of all, you didn't run 'mkfs.xfs -i size=512'. You just ran 'mkfs.xfs',
which is not good and could have caused your VM problems. Also, check the
isize of the FS with xfs_info.
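For reference, a sketch of the formatting step being pointed to, with an explicit 512-byte inode size; the device path is an assumption:

```shell
# Format a brick device with the inode size Gluster expects
# (device path is a placeholder).
mkfs.xfs -i size=512 /dev/sdb1
# Verify the resulting inode size.
xfs_info /dev/sdb1 | grep isize
```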
You have to find the UUID of the disks of the affected VM. Then go to the
removed host and find that file -> this is the
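A minimal sketch of locating that file on the removed host's brick; the brick path is taken from later in the thread and the UUID is a placeholder you must fill in from the oVirt UI:

```shell
# Search the brick for the disk image named after its UUID
# (both the path and the UUID below are placeholders).
find /gluster_bricks/data/data -name '<disk-uuid>'
```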
Hi Patrick,
This would be amazing, if possible.
Checking /gluster_bricks/data/data on the host where I've removed (but not
replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:
dom_md
images
If I go into the images directory, I think I see the has
Greetings, I once wondered how data is stored between replicated bricks.
Specifically, how disks are stored on the storage domain in Gluster. I checked
a mounted brick via the standard path (path may be different)
/gluster/data/data and saw many directories there. Maybe the hierarchy is
differe