Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, April 16, 2021 4:40 AM, Vojtech Juranek <vjura...@redhat.com> wrote:

> Hi,
> another approach you may consider is to use the HA functionality provided by oVirt,
> which re-creates the VM on another host if the original host becomes unavailable and
> attaches the disk from the underlying Gluster storage (which is already HA) to the newly
> created VM, so you don't have to care about creating HA storage on your VMs. You also
> save some resources, as you don't have to keep backup VMs running.

This thought did occur to me as well. However, the customer is pretty adamant
that they would like to reduce downtime as much as possible, and I knew that
the HA feature built into oVirt meant there could be some downtime while a VM
gets killed on one host and restarted on another. Don't get me wrong - I'll still
mark all three of the VMs as HA inside of oVirt.


But we're using Cloudflare for the load balancing, and we're going to push two
of the VMs through one of our datacenter uplinks and the third VM through the
second uplink, so that our routing will be HA for this customer as well.

At this point, I'm strongly leaning towards just a simple rsync via cron that
will run every minute (or perhaps every 5 minutes); something along the lines
of the rough sketch below.
But yeah, I'm pretty disappointed that the shared disk approach isn't supported
with the current Gluster implementation.
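
For what it's worth, here's a rough, untested sketch of the kind of script I'd have cron
call on whichever VM is treated as the source of truth. The hostnames and paths are only
placeholders for illustration, and it assumes passwordless SSH keys between the VMs:

    #!/usr/bin/env python3
    # sync_webroot.py - rough sketch only, not tested.
    # Pushes the local web root to the other two web VMs using rsync over SSH.
    # The hostnames and paths below are placeholders, not real values.

    import subprocess

    SOURCE = "/var/www/html/"  # trailing slash: sync the contents, not the directory itself
    TARGETS = ["web2.example.com", "web3.example.com"]  # the other two VMs (placeholders)

    for host in TARGETS:
        # -a preserves ownership/permissions/timestamps; --delete removes files
        # that no longer exist on the source (drop it if that feels too risky).
        subprocess.run(["rsync", "-a", "--delete", SOURCE, f"{host}:{SOURCE}"], check=False)

A crontab entry along the lines of "*/5 * * * * /usr/local/bin/sync_webroot.py" would cover
the every-five-minutes case.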


I'm sure that there are technical issues involved with that decision, but I'm 
wondering if that could be a reasonable feature request for future versions of 
oVirt?


> The drawback of this approach is that re-creating the VM takes more time than
> switching traffic to an already-running VM on the load balancer. This depends on
> your constraints, but IMHO it's worth considering.
>
> Vojta

> On Friday, 16 April 2021 03:00:23 CEST David White via Users wrote:
> 

> > I'm currently thinking about just setting up an rsync cron job to run every
> > minute.
> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > On Thursday, April 15, 2021 8:55 PM, David White via Users users@ovirt.org wrote:
> > > > David, I’m curious what the use case is
> > > 

> > > This is for a customer who wants as much high availability as possible for
> > > their website, which relies on a basic LAMP or LNMP stack.
> > > The plan is to create a MariaDB Galera cluster.
> > > Each of the 3 VMs will run MariaDB, as well as Apache or Nginx (I haven't
> > > decided which yet), and will be able to accept web traffic.
> > > So the website files will need to be the same across all 3 virtual
> > > servers.
> > > My original intent was to set up a mount point on all 3 virtual servers
> > > that mapped back to the same shared disk.
> > > Strahil, one idea I had, which I don't think would be ideal at all, was to
> > > set up a separate, new Gluster configuration on each of the 3 VMs.
> > > Gluster virtualized on top of Gluster! If that doesn't make your head
> > > spin, what will? But I'm not seriously thinking about that. :)
> > > It did occur to me that I could set up a 4th VM to host the NFS share,
> > > but I'm trying to stay away from as many single points of failure as
> > > possible.
> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > On Thursday, April 15, 2021 7:40 PM, Strahil Nikolov via Users users@ovirt.org wrote:
> > > > I know that clustered applications (for example corosync/pacemaker
> > > > Active-Passive or even GFS2) require simultaneous access to the data.
> > > > In your case you can create:
> > > >
> > > > -   2 separate VMs replicating over DRBD and sharing the storage over
> > > >     NFS/iSCSI
> > > > -   Using NFS Ganesha (this is just a theory but should work)
> > > >     to export your Gluster volumes in a redundant and highly available way
> > > >
> > > > Best Regards,
> > > > Strahil Nikolov

> > > > On Friday, 16 April 2021, 01:56:09 GMT+3, Jayme jay...@gmail.com wrote:
> > > > David, I’m curious what the use case is. Do you plan on using the disk
> > > > with three VMs at the same time? This isn’t really what shareable
> > > > disks are meant to do, AFAIK. If you want to share storage with multiple
> > > > VMs, I’d probably just set up an NFS share on one of the VMs.
> > > >
> > > > On Thu, Apr 15, 2021 at 7:37 PM David White via Users users@ovirt.org wrote:
> > > > > I found the proper documentation at
> > > > > https://www.ovirt.org/documentation/administration_guide/#Shareable_Disks.
> > > > > When I try to edit the disk, I see that sharable is grayed
> > > > > out, and when I hover my mouse over it, I see "Sharable Storage is
> > > > > not supported on Gluster/Offload Domain". So to confirm, is there any
> > > > > circumstance where a Gluster volume can support sharable storage?
> > > > > Unfortunately, I don't have any other storage available, and I chose
> > > > > to use Gluster so that I could have an HA environment.
> > > > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > > > On Thursday, April 15, 2021 5:05 PM, David White via Users users@ovirt.org wrote:
> > > > > > I need to mount a partition across 3 different VMs.
> > > > > > How do I attach a disk to multiple VMs?
> > > > > > This looks like fairly old documentation-not-documentation:
> > > > > > https://www.ovirt.org/develop/release-management/features/storage/sharedrawdisk.html


_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOO3X3WMCW7VOM3FSAKTEWTGEUWNOOPY/
