Thank you for all the responses.
Following Strahil's instructions, I *think* I was able to reconstruct the 
disk image. I'm just waiting for that image to finish downloading onto my local 
machine, at which point I'll try to import it into VirtualBox or something 
similar. Fingers crossed!
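
If the reconstructed image turns out to be qcow2, I'm assuming a conversion 
along these lines will let VirtualBox attach it (the filenames below are just 
placeholders):

    # convert the recovered qcow2 image to VDI, which VirtualBox can attach directly
    qemu-img convert -f qcow2 -O vdi recovered-disk.qcow2 recovered-disk.vdi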

Worst case scenario, I do have backups for that particular VM from 3 months 
ago, which I have already restored onto a new VM.
Losing 3 months of data is much better than losing 100% of the data from the 
past 2-3+ years.

Thank you.

> First of all, you didn't 'mkfs.xfs -i size=512'. You just 'mkfs.xfs', which 
> is not good and could have caused your VM problems. Also, check the isize of 
> the FS with xfs_info.

OK, so right now my production cluster is operating off of a single brick. 
Next week I was planning to expand the storage on the 2nd host, add it back 
into the cluster, and get the Replica 2, Arbiter 1 redundancy working again.

How would you recommend I proceed with that plan, knowing that I'm currently 
operating off of a single brick that was NOT formatted with 
`mkfs.xfs -i size=512`?
Should I specify the inode size on the new brick I build next week, and then, 
once everything is healed, reformat the current brick?
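
For what it's worth, here's roughly what I'm planning to run -- the mount point 
and device below are placeholders, so please correct me if I'm off:

    # check the inode size on the existing (incorrectly formatted) brick
    xfs_info /gluster_bricks/data | grep isize

    # format the new brick next week with the recommended inode size
    mkfs.xfs -f -i size=512 /dev/sdb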

> And then there is a lot of information missing between the lines: I guess you 
> are using a 3 node HCI setup and were adding new disks (/dev/sdb) on all 
> three nodes and trying to move the glusterfs to those new bigger disks?

You are correct that I'm using a 3-node HCI setup. I originally built HCI with 
Gluster replication on all 3 nodes (Replica 3). As I'm increasing the storage, 
I'm also moving to an architecture of Replica 2/Arbiter 1. So yes, the plan was 
(a rough sketch of the gluster commands for the remaining steps follows the 
list):

1) Convert FROM Replica 3 TO Replica 2/Arbiter 1
2) Convert again down to a Replica 1 (so no replication... just operating 
storage on a single host)
3) Rebuild the RAID array (with larger storage) on one of the unused hosts, and 
rebuild the gluster bricks
4) Add the larger RAID back into gluster, let it heal
5) Now, remove the bricks from the host with the smaller storage -- THIS is 
where things went awry, and what caused the data loss on this 1 particular VM
--- This is where I am currently ---
6) Rebuild the RAID array on the remaining host that is now unused (This is 
what I am / was planning to do next week)
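
For the remaining steps, I'm assuming the gluster side will look roughly like 
this -- the volume name "data" and the host/brick paths are placeholders, so 
please correct me if the replica counts are wrong:

    # step 6: add the rebuilt brick back as a second replica
    gluster volume add-brick data replica 2 host2:/gluster_bricks/data/brick

    # then add the arbiter brick (gluster counts it as the 3rd brick)
    gluster volume add-brick data replica 3 arbiter 1 host3:/gluster_bricks/data/arbiter

    # and watch the heal finish before removing or reformatting anything
    gluster volume heal data info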




Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Thursday, August 5th, 2021 at 3:12 PM, Thomas Hoberg <tho...@hoberg.net> 
wrote:

> If you manage to export the disk image via the GUI, the result should be a 
> qcow2 format file, which you can mount/attach to anything Linux (well, if the 
> VM was Linux... it didn't say)
> 

> But it's perhaps easier to simply try to attach the disk of the failed VM as 
> a secondary to a live VM to recover the data.
> 



_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZDHKGQES4ZOGGFJIBB46CZEGD647DLZ/
