Yes! Using two machines and GlusterFS, for instance, is an easy way to achieve this. (First of all you need to create a cluster over Proxmox: https://pve.proxmox.com/wiki/Cluster_Manager)

Just make a folder, like /DATA, on each server. Make sure this folder is on a separate HDD, rather than on the same HDD as the OS!

Then follow the instructions here to install and upgrade GlusterFS: https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/ (make sure you choose buster!)

Install the gluster server:

  apt install glusterfs-server

Now, make sure that you have a separate NIC so that all the Gluster traffic runs over it. Use some private network address, and make sure server1 and server2 are in /etc/hosts with the corresponding private IPs.

After installing the gluster server, run this on the first node to probe the second one:

  gluster peer probe server2

Then use this command to create a replica 2 GlusterFS volume:

  gluster vol create VMS replica 2 server1:/DATA/vms server2:/DATA/vms

Bring the VMS volume up:

  gluster vol start VMS

Then add this to /etc/fstab on server1:

  server1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server2 0 0

And add this to /etc/fstab on server2:

  server2:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server1 0 0
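The two fstab lines differ only in which server mounts from itself and which is the backup volfile server. A small sketch that builds the entry from the hostnames in /etc/hosts (run on each node with NODE set to itself and PEER to the other box):

```shell
# Build the fstab entry for this node; NODE/PEER are the /etc/hosts
# names used above (swap them when running on the second server).
NODE=server1
PEER=server2
FSTAB_LINE="${NODE}:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=${PEER} 0 0"
echo "$FSTAB_LINE"
```

The backupvolfile-server option is what lets each node still mount the volume when its peer is down.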
Add this on each server, in the file /etc/systemd/system/glusterfsmounts.service:

  [Unit]
  Description=Glustermounting
  Requires=glusterd.service

  [Service]
  Type=simple
  RemainAfterExit=true
  ExecStartPre=/usr/sbin/gluster volume list
  ExecStart=/bin/mount -a -t glusterfs
  Restart=on-failure
  RestartSec=3

  [Install]
  WantedBy=multi-user.target

Then run:

  systemctl daemon-reload
  systemctl enable glusterfsmounts

This will make sure the system mounts the /vms directory after a reboot.

You need to apply some tuning tricks here:

  gluster vol set VMS cluster.heal-timeout 5
  gluster volume heal VMS enable
  gluster vol set VMS cluster.quorum-reads false
  gluster vol set VMS cluster.quorum-count 1
  gluster vol set VMS network.ping-timeout 2
  gluster volume set VMS cluster.favorite-child-policy mtime
  gluster volume heal VMS granular-entry-heal enable
  gluster volume set VMS cluster.data-self-heal-algorithm full

After all that, go to Datacenter -> Storage -> Directory and add /vms as a directory storage in your Proxmox. Remember to mark it as shared storage.

I have used this setup for many months now and so far no issues. But the most clever way is to keep backups, right?

Cheers
---
Gilberto Nunes Ferreira

On Mon, Nov 30, 2020 at 14:10, Leandro Roggerone <[email protected]> wrote:
>
> Alejandro, thanks for your words.
> Let me explain:
> About live migration... yes, I think this is what I need to achieve.
> So basically you can "drag and drop" VMs from one node to another?
>
> What do I need to achieve this? I only have one node.
> My current pve box is in production with very important machines running on
> it.
> I will add a second pve server machine soon.
> But I don't have any network storage, so the question would be:
> Having two pve machines (one already running and a fresh one), is it possible
> to perform live migrations?
> Or is it mandatory to have intermediate hardware or something like that?
>
> Regards,
> Leandro.
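For convenience, the unit file above can be written in one step with a heredoc; a sketch (a temp file is used here only so the snippet is self-contained — on the real servers the path is /etc/systemd/system/glusterfsmounts.service):

```shell
# Write the glusterfsmounts.service unit shown above in one shot.
# mktemp path is for illustration only; use the real systemd path on
# the servers, then: systemctl daemon-reload && systemctl enable glusterfsmounts
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Unit]
Description=Glustermounting
Requires=glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF
grep -c '^\[' "$UNIT"   # sanity check: the three section headers are present
```

The ExecStartPre line makes the mount wait until glusterd is actually answering, which is the whole point of the extra unit.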
> On Mon, Nov 30, 2020 at 13:45, Alejandro Bonilla via pve-user (<
> [email protected]>) wrote:
> >
> > ---------- Forwarded message ----------
> > From: Alejandro Bonilla <[email protected]>
> > To: Proxmox VE user list <[email protected]>
> > Date: Mon, 30 Nov 2020 16:45:29 +0000
> > Subject: Re: [PVE-User] moving machines among proxmoxs servers.
> >
> > > On Nov 30, 2020, at 11:21 AM, Leandro Roggerone <
> > [email protected]> wrote:
> > >
> > > Hi guys.
> > > Just wondering if it is possible to move machines without outage?
> >
> > I thought at first you were referring to a live migration, which is easy to
> > achieve:
> >
> > 64 bytes from 10.0.0.111: icmp_seq=25 ttl=64 time=0.363 ms
> > 64 bytes from 10.0.0.111: icmp_seq=26 ttl=64 time=0.397 ms
> > 64 bytes from 10.0.0.111: icmp_seq=27 ttl=64 time=0.502 ms
> > Request timeout for icmp_seq 28
> > 64 bytes from 10.0.0.111: icmp_seq=29 ttl=64 time=0.366 ms
> > 64 bytes from 10.0.0.111: icmp_seq=30 ttl=64 time=0.562 ms
> > 64 bytes from 10.0.0.111: icmp_seq=31 ttl=64 time=0.469 ms
> >
> > And it certainly happens with little to no outage.
> >
> > > What do I need to achieve this ?
> >
> > More than one node (a cluster) and storage, then perform a migration… using
> > storage like Ceph will make the migration way faster.
> >
> > > Currently have only one box ...
> >
> > And then I got confused. Are you trying to migrate from another hypervisor,
> > or are you just asking if it's possible at all and would then add another
> > box?
> >
> > > Thanks.
> > > Leandro.
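To Leandro's question above: once both boxes are joined in a cluster, the live migration itself is a single command on the source node (a sketch — VMID 100 and the node name "pve2" are made-up examples):

```shell
# Hypothetical live-migration invocation; --online keeps the guest
# running while it moves. The same action is the "Migrate" button in
# the web GUI. Here we only build and print the command line.
VMID=100
TARGET=pve2
MIGRATE_CMD="qm migrate ${VMID} ${TARGET} --online"
echo "$MIGRATE_CMD"
```

With shared storage (like the GlusterFS /vms directory above, or Ceph) only RAM needs to cross the wire, which is why the migration is so much faster than with local disks.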
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
