On 4/21/21 10:21 AM, Andrei Borzenkov wrote:
>> If I set the stickiness to 100, it's a race condition: many times we
>> get the storage layer migrated without VirtualDomain noticing. But if
>> the stickiness is not set, then moving a resource causes the cluster to
>> re-balance, and that makes the VM fail every time, because validation
>> is one of the first things we do when we migrate the VM, and it happens
>> at the same time as an IP-ZFS-iSCSI move, so the config file goes away
>> for 5 seconds.
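>>
>> (A sketch of the stickiness setting, assuming pcs is the admin tool
>> here; "my-vm" is a made-up resource name:)
>>
>>   # cluster-wide default: resources prefer to stay where they run
>>   pcs resource defaults resource-stickiness=100
>>   # or scoped to just the VM resource
>>   pcs resource meta my-vm resource-stickiness=100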
>>
>> I'm not sure how to fix this. The nodes don't have local storage that [...]
> Your nodes must have an operating system and the pacemaker stack loaded
> from somewhere before they can import a zfs pool.

Yup, and they do. There are plenty of ways to do this: internal SD
card, USB boot, PXE boot, etc. I prefer this approach because I don't
need to maintain a boot drive, the nodes boot from the exact same
image, and I have gobs of memory, so the running system can run in a
ramdisk. This also makes it possible to boot my nodes with failed
disks/controllers, which makes troubleshooting easier. I basically
made a live CD distro that has everything I need.
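
To give an idea, a PXE menu entry for this kind of boot can look
roughly like the following (paths are made up, and the live-boot
arguments assume a dracut-built initramfs with the dmsquash-live and
livenet modules; other initramfs tools spell this differently):

  LABEL cluster-node
    KERNEL vmlinuz
    # root=live: points at the read-only system image over HTTP;
    # rd.live.ram=1 copies the image to RAM so the node runs from a ramdisk
    APPEND initrd=initrd.img root=live:http://boot-server/node.squashfs rd.live.ram=1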

>> I suppose the next step is to see if NFS has some sort of retry mode so [...]
> That is what the "hard" mount option is for.

Thanks, I'll take a look.
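
If I'm reading nfs(5) right, "hard" is actually the default on Linux,
and it makes the client block and retry requests indefinitely instead
of returning an error while the server is away, so something like this
should ride out the failover (server name and paths made up):

  # hard mount: I/O blocks and retries until the NFS server comes back
  mount -t nfs -o hard nfs-server:/export/vm-configs /mnt/vm-configs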
Matt