Hello,

The unused volume is on the new Proxmox node, and these unused logical volumes
are empty. Live migration of the machine leads to an immediate kernel panic,
and fsck (after disconnecting the LV on the second node) is not able to find
any filesystem.


What do you mean by empty? Normally, when LINSTOR creates an additional
resource, it should initiate a sync from the original resource to the
destination resource. You cannot live-migrate the resource until the sync has
completed and both resources show up as UpToDate. Did you check the status of
the resource on both nodes with drbdtop?
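
For example (a minimal sketch; I'm reusing the resource name vm-103-disk-1
from your status output below, and drbdtop may not be installed on every
node), something like this on each node shows the disk state and any resync
progress:

# replication and disk state for the resource
drbdadm status vm-103-disk-1
# more detail, including resync statistics
drbdsetup status vm-103-disk-1 --verbose --statistics
# the overall LINSTOR view of all resources
linstor resource list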

Are you referring to a kernel panic on the guest VM OS or on the Proxmox host?


What is interesting here is that the allocated space isn't the same on both sides.


I think that's a side effect of how thin LVM works. Meaning, you have probably
written/modified/deleted data over time on the original resource, so thin LVM
keeps growing the space allocation on that LV (unless you have enabled the
discard option on PVE), whereas the new volume only shows the data actually in
use right now. You have to continuously monitor thin LVM allocated space for
this very reason; you can easily run into serious problems otherwise, but this
is not a DRBD or a LINSTOR issue anyway.
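
For reference (a sketch only; the volume group and thin pool names will differ
on your setup), the per-LV and pool allocation can be watched with plain LVM
tooling:

# Data% / Meta% show how full each thin LV and the pool are
lvs -a -o lv_name,vg_name,lv_size,data_percent,metadata_percent
# re-run periodically, e.g. every 60 seconds
watch -n 60 lvs -a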


The resource files are shared here: https://pastebin.com/V7mjSCar


The status output is:


drbdsetup status vm-103-disk-1


first node

vm-103-disk-1 role:Primary
 disk:UpToDate
 pve-virt2 role:Secondary
   peer-disk:UpToDate


second node


vm-103-disk-1 role:Secondary
 disk:UpToDate
 pve-virt1 role:Primary
   peer-disk:UpToDate


Both resource files and the status output look good...


And the last piece of info: I tried to resync the volume, and in dmesg I have
this; please note that the amount of data to resync after --discard-my-data is
far smaller than the volume itself.


--discard-my-data does not initiate a full sync, if that's what you mean. It
is rather used in situations where a split-brain has occurred, so that you
discard the last changes on one side in favour of the other.

If your intention is to initiate a full sync, then consider using "drbdadm
invalidate", or recreate the metadata on the second resource using "drbdadm
create-md --force".
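
As a rough sketch of those two options (assuming the resource is vm-103-disk-1
and pve-virt2 holds the copy you want to throw away; double-check the man
pages before running this against live data):

# on pve-virt2: mark the local copy inconsistent and force a full resync from the peer
drbdadm invalidate vm-103-disk-1

# or, more drastically, recreate the metadata on pve-virt2 and reconnect
drbdadm down vm-103-disk-1
drbdadm create-md --force vm-103-disk-1
drbdadm up vm-103-disk-1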


The Proxmox nodes are connected and live migration works, but only for the new
VMs, not the older ones.

That's an indication that you're probably doing something wrong when you
manually increase the replica count for the old VMs. Might it be easier for
you to back up the old VMs and restore them as new VMs via the Proxmox GUI?
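
If you prefer the CLI for that, here is a sketch (the new VMID, dump filename
and storage names are placeholders, not taken from your setup):

# back up the old VM (103) to a storage that holds backups
vzdump 103 --storage local --mode snapshot
# restore the dump under a new VMID onto the LINSTOR/DRBD-backed storage
qmrestore /var/lib/vz/dump/vzdump-qemu-103-<timestamp>.vma 203 --storage drbdstorage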

Gianni
